Developments in world institutional rankings; SCImago joins the club

Editor’s note: this guest entry was kindly written by Gavin Moodie, principal policy adviser of Griffith University in Australia.  Gavin (pictured to the right) is most interested in the relations between vocational and higher education. His book From Vocational to Higher Education: An International Perspective was published by McGraw-Hill last year. Gavin’s entry sheds light on a new ranking initiative that needs to be situated within the broad wave of contemporary rankings – and bibliometrics more generally – that are being used to analyze, legitimize, critique and promote higher education, not to mention extract revenue from it.  Our thanks to Gavin for the illuminating contribution below.

~~~~~~~~~~~~~~~~~~~~~~~~

It has been a busy time for world institutional rankings watchers recently. Shanghai Jiao Tong University’s Institute of Higher Education published its academic ranking of world universities (ARWU) for 2009. The institute’s 2009 rankings include its by now familiar ranking of 500 institutions’ overall performance and the top 100 institutions in each of five broad fields: natural sciences and mathematics, engineering/technology and computer sciences, life and agriculture sciences, clinical medicine and pharmacy, and social sciences. This year Dr. Liu and his colleagues have added rankings of the top 100 institutions in each of five subjects: mathematics, physics, chemistry, computer science and economics/business.

Times Higher Education announced that over the next few months it will develop a new method for its world university rankings which in future will be produced with Thomson Reuters. Thomson Reuters’ contribution will be guided by Jonathan Adams (Adams’ firm, Evidence Ltd, was recently acquired by Thomson Reuters).

And a new ranking has been published, SCImago institutions rankings: 2009 world report. This is a league table of research institutions by various factors derived from Scopus, the database of the huge multinational publisher Elsevier. SCImago’s institutional research rank is distinctive in including, alongside higher education institutions, government research organisations such as France’s Centre National de la Recherche Scientifique, health organisations such as hospitals, and private and other organisations. Only higher education institutions are considered here. The ranking was produced by the SCImago Research Group, a Spain-based research network “dedicated to information analysis, representation and retrieval by means of visualisation techniques”.

SCImago’s rank is very useful in not cutting off at the top 200 or 500 universities, but in including all organisations with more than 100 publications indexed in Scopus in 2007. It therefore includes 1,527 higher education institutions in 83 countries. But even so, it is highly selective, including only 16% of the world’s estimated 9,760 universities, 76% of US doctoral-granting universities, 65% of UK universities and 45% of Canada’s universities. In contrast, all of New Zealand’s universities and 92% of Australia’s universities are listed in SCImago’s rank. Some 38 countries have seven or more universities in the rank.

SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised SCImago journal rank and normalised citations per output. This discussion will concentrate on total outputs and normalised citations per output.

Together these measures show that countries have been following two broad paths to supporting their research universities. One group of countries in northern continental Europe around Germany have supported a reasonably even development of their research universities, while another group of countries influenced by the UK and the US have developed their research universities much more unevenly. Both seem to be successful in supporting research volume and quality, at least as measured by publications and citations.

Volume of publications

Because a reasonable number of countries have several higher education institutions listed in SCImago’s rank, it is possible to consider countries’ performance rather than concentrate on individual institutions as the smaller ranks encourage. I do this by taking the average of the performance of each country’s universities. The first measure of interest is the number of publications each university has indexed in Scopus over the five years from 2003 to 2007, which is an indicator of the volume of research. The graph in figure 1 shows the mean number of outputs for each country’s higher education research institutions. It shows only countries which have more than six universities included in SCImago’s rank, which leaves out 44 countries and thus much of the tail in institutions’ performance.

Figure 1: mean of universities’ outputs for each country with > 6 universities ranked


These data are given in table 1. The first column gives the number of higher education institutions each country has ranked in SCImago institutions rankings (SIR): 2009 world report. The second column shows the mean number of outputs indexed in Scopus for each country’s higher education research institutions from 2003 to 2007. The next column shows the standard deviation of the number of outputs for each country’s research university.

The next column in table 1 shows the coefficient of variation, which is the standard deviation divided by the mean and multiplied by 100. This is a measure of the evenness of the distribution of outputs amongst each country’s universities. Thus, the five countries whose universities had the highest average number of outputs indexed in Scopus from 2003 to 2007 – the Netherlands, Israel, Belgium, Denmark and Sweden – also had a reasonably low coefficient of variation, below 80. This indicates that research volume is spread reasonably evenly amongst those countries’ universities. In contrast, Canada, which had the sixth highest average number of outputs, also has a reasonably high coefficient of variation of 120, indicating an uneven distribution of outputs amongst Canada’s research universities.
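
For readers who want to check the arithmetic, a minimal sketch of the coefficient-of-variation calculation is below. It is purely illustrative: the publication counts are invented rather than SCImago data, and since the article does not say whether a population or sample standard deviation was used, the population form is assumed.

```python
# Illustrative sketch only: computing a coefficient of variation (CV) for one
# country's universities from their publication counts. The numbers below are
# invented; they are not SCImago data. Population standard deviation is
# assumed, since the article does not specify which form was used.
from statistics import mean, pstdev

def coefficient_of_variation(outputs):
    """Standard deviation divided by the mean, multiplied by 100."""
    return pstdev(outputs) / mean(outputs) * 100

even_country = [9500, 10200, 8800, 11000, 9900, 10400]    # outputs spread evenly
uneven_country = [30000, 4000, 2500, 12000, 1800, 9000]   # one dominant university

print(round(coefficient_of_variation(even_country)))    # low CV: even distribution
print(round(coefficient_of_variation(uneven_country)))  # high CV: uneven distribution
```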

The final column in table 1 shows the mean of SCImago’s international collaboration score, which reflects the proportion of an institution’s outputs jointly authored with someone from another country. The US’s international collaboration is rather low because US authors more often collaborate with authors at other institutions within their own country.

Table 1: countries with > 6 institutions ranked by institutions’ mean outputs, 2007

Source: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report.

Citations per paper by field

We next examine citations per paper by field of research, which is an indicator of the quality of research. This is the ratio between the average citations per publication of an institution and the world average citations per publication over the same time frame and subject area. SCImago says it computed this ratio using the method established by Sweden’s Karolinska Institutet, which it calls the ‘Item oriented field normalized citation score average’. A score of 0.8 means the institution is cited 20% below the world average and 1.3 means the institution is cited 30% above the world average.
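
The sketch below illustrates the general idea of such a normalisation: an institution’s citations per paper divided by the world rate for the same field and period. It is not the Karolinska Institutet algorithm itself (which normalises each publication against its own field and year baseline before averaging), and all figures are invented for illustration.

```python
# Illustrative sketch of a field-normalised citation score: the institution's
# citations per paper divided by the world citations per paper for the same
# field and period. This simplifies the 'item oriented' method described above,
# which normalises each paper individually before averaging. All figures are
# invented.

def normalised_citation_score(inst_citations, inst_papers,
                              world_citations, world_papers):
    institution_rate = inst_citations / inst_papers
    world_rate = world_citations / world_papers
    return institution_rate / world_rate

# An institution cited at 80% of the world rate in its field scores 0.8,
# i.e. 20% below average; 1.3 would mean 30% above average.
print(normalised_citation_score(4_000, 1_000, 5_000_000, 1_000_000))  # -> 0.8
```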

Figure 2 shows mean normalised citations per paper for each country’s higher education research institutions from 2003 to 2007, again showing only countries which have more than six universities included in SCImago’s rank. The graph for an indicator of research quality in figure 2 is similar in shape to the graph of research volume in figure 1.

Figure 2: mean of universities’ normalised citations per paper for each country with > 6 universities ranked

Table 2 shows countries with more than six higher education research institutions ranked by their institutions’ mean normalised citations. This measure distinguishes more sharply between institutions than volume of outputs does – the coefficients of variation for countries’ institutions’ normalised citations are higher than those for the number of publications. Nonetheless, several countries with high mean normalised citations have an even performance amongst their universities on this measure – Switzerland, the Netherlands, Sweden, Germany, Austria, France, Finland and New Zealand.

Finally, I wondered whether countries which had a reasonably even performance of their research universities by volume and quality of publications reflected a more equal society. To test this I obtained from the Central Intelligence Agency’s (2009) World Factbook the Gini index of the distribution of family income within a country. A country with a Gini index of 0 would have perfect equality in the distribution of family income, whereas a country with perfect inequality in its distribution of family income would have a Gini index of 100. There is a modest correlation of 0.37 between a country’s Gini index and its coefficient of variation for both publications and citations.
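
A minimal sketch of that correlation check follows. The Gini and coefficient-of-variation values are invented placeholders rather than the actual CIA World Factbook or SCImago-derived figures, so the result will not reproduce the 0.37 reported above; it only shows the calculation.

```python
# Minimal sketch: Pearson correlation between countries' Gini indices and the
# coefficients of variation of their universities' outputs. The values are
# invented placeholders, not the actual Factbook or SCImago-derived figures.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

gini = [25.0, 28.0, 33.0, 36.0, 41.0, 45.0]    # hypothetical Gini indices
cv   = [60.0, 95.0, 70.0, 120.0, 90.0, 140.0]  # hypothetical CVs of outputs

print(round(pearson(gini, cv), 2))
```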

Table 2: countries with > 6 institutions ranked by institutions’ normalised citations per output

Sources: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report; Central Intelligence Agency (2009) The world factbook.

Conclusion

SCImago’s institutions research rank is sufficiently comprehensive to support comparisons between countries’ research higher education institutions. It finds two patterns amongst countries whose research universities have a high average volume and quality of research publications. One group of countries has a fairly even performance across its research universities, presumably because they have had fairly even levels of government support. This group is in northern continental Europe and includes Switzerland, Germany, Sweden, the Netherlands, Austria, Denmark and Finland. The other group of countries also has a high average volume and quality of research publications, but spread much more unevenly between universities. This group includes the US, the UK and Canada.

This finding is influenced by the measure I chose to examine countries’ performance: the average of their research universities’ performance. Different results might have been found using another measure of countries’ performance, such as the number of universities a country has in the top 100 or 500 research universities normalised by gross domestic product. But such a measure would reflect not the overall performance of a country’s research universities, but only the performance of its champions. Whether one is interested in a country’s overall performance or just the performance of its champions depends on whether one believes more benefit is gained from a few outstanding performers or from several excellent performers. That would usefully be the subject of another study.

Gavin Moodie

References

Central Intelligence Agency (2009) The world factbook (accessed 29 October 2009).

SCImago institutions rankings (SIR): 2009 world report (revised edition accessed 20 October 2009).

QS.com Asian University Rankings: niches within niches…within…

Today, for the first time, the QS Intelligence Unit published their list of the top 100 Asian universities in their QS.com Asian University Rankings.

There is little doubt that the top performing universities have already added this latest branding to their websites, or that Hong Kong SAR will have proudly announced it has three universities in the top 5 while Japan has 2.

QS.com Asian University Rankings is a spin-out from the QS World University Rankings published since 2005.  Last year, when the 2008 QS World University Rankings was launched, GlobalHigherEd posted an entry asking:  “Was this a niche industry in formation?”  This was in reference to strict copyright rules invoked – that ‘the list’ of decreasing ‘worldclassness’ could not be displayed, retransmitted, published or broadcast – as well as acknowledgment that rankings and associated activities can enable the building of firms such as QS Quacquarelli Symonds Ltd.

Seems like there are ‘niches within niches within….niches’ emerging in this game of deepening and extending the status economy in global higher education.  According to the QS Intelligence website:

Interest in rankings amongst Asian institutions is amongst the strongest in the world – leading to Asia being the first of a number of regional exercises QS plans to initiate.

The narrower the geographic focus of a ranking, the richer the available data can potentially be – the US News & World Report draws on 18 indicators, the Joong Ang Ilbo ranking in Korea on over 30. It is both appropriate and crucial then that the range of indicators used at a regional level differs from that used globally.

The objectives of each exercise are slightly different – whilst a global ranking seeks to identify truly world class universities, contributing to the global progress of science, society and scholarship, a regional ranking should adapt to the realities of the region in question.

Sure, the ‘regional niche’ allows QS.com to package and sell new products to Asian and other universities, as well as information to prospective students about who is regarded as ‘the best’.

However, the QS.com Asian University Rankings does more work than just that.  The ranking process and product places ‘Asian universities’ into direct competition with each other, reinforces a very particular definition of ‘Asia’ and therefore of Asian regionalism, and services an imagined emerging Asian regional education space.

All this, whilst appearing to level the playing field by invoking regional sentiments.

Susan Robertson

Regional content expansion in Web of Science®: opening borders to exploration

Editor’s note: this guest entry was written by James Testa, Senior Director, Editorial Development & Publisher Relations, Thomson Reuters. It was originally published on an internal Thomson Reuters website. James Testa (pictured to the left) joined Thomson Reuters (then ISI) in 1983. From 1983 through 1996 he managed the Publisher Relations Department and was directly responsible for building and maintaining working relations with the over three thousand international scholarly publishers whose journals are indexed by Thomson Reuters.  In 1996 Mr. Testa was appointed the Director of Editorial Development. In this position he directed a staff of information professionals in the evaluation and selection of journals and other publication formats for coverage in the various Thomson Reuters products. In 2007 he was named Senior Director, Editorial Development & Publisher Relations.  In this combined role he continues to build content for Thomson Reuters products and work to increase efficiency in communication with the international STM publishing community. He is a member of the American Society of Information Science and Technology (ASIST) and has spoken frequently on behalf of Thomson Reuters in the Asia Pacific region, South America, and Europe.

Our thanks also go to Susan Besaw of Thomson Reuters for facilitating access to the essay. This guest entry ties in to one of our earlier entries on this topic (‘Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production’), as well as a fascinating new entry (‘The Canadian Center of Science and Education and Academic Nationalism’) posted on the consistently excellent Scott Sommers’ Taiwan Blog.

~~~~~~~~~~~~~~~~~~~~~

Thomson Reuters extends the power of its Journal Selection Process by focusing on the world’s best regional journals. The goal of this initiative is to enrich the collection of important and influential international journals now covered in Web of Science with a number of superbly produced journals whose content is of specific regional importance.

Since the Journal Selection Process was established nearly fifty years ago by Eugene Garfield, PhD, its primary goal has been to identify those journals which form the core literature of the sciences, social sciences, and arts & humanities. These journals publish the bulk of scholarly research, receive the most citations from the surrounding literature, and have the highest citation impact of all journals published today. The journals selected for the Web of Science are, in essence, the scholarly publications that meet the broadest research needs of the international community of researchers. They have been selected on the basis of their high publishing standards, their editorial content, the international diversity of their contributing authors and editorial board members, and on their relative citation frequency and impact. International journals selected for the Web of Science define the very highest standards in the world of scholarly publishing.

In recent years, however, the user community of the Web of Science has expanded gradually from what was once a concentration of major universities and research facilities in the United States and Western Europe to an internationally diverse group including virtually all major universities and research centers in every region of the world. Where once the Thomson Reuters sales force was concentrated in Philadelphia and London, local staff are now committed to the service of customers at offices in Japan, Singapore, Australia, Brazil, China, France, Germany, Taiwan, India, and South Korea.

As the global distribution of Web of Science expands into virtually every region on earth, the importance of regional scholarship to our emerging regional user community also grows. Our approach to regional scholarship effectively extends the scope of the Thomson Reuters Journal Selection Process beyond the collection of the great international journal literature: it now moves into the realm of the regional journal literature. Its renewed purpose is to identify, evaluate, and select those scholarly journals that target a regional rather than an international audience. Bringing the best of these regional titles into the Web of Science will illuminate regional studies that would otherwise not have been visible to the broader international community of researchers.

In the Fall of 2006, the Editorial Development Department of Thomson Reuters began this monumental task. Under the direction of Maureen Handel, Manager of Journal Selection, the team of subject editors compiled a list of over 10,000 scholarly publications representing all areas of science, social science, the arts, and humanities. Over the next twelve months the team was able to select 700 regional journals for coverage in the Web of Science.

The Web of Science Regional Journal Profile

These regional journals are typically published outside the US or UK. Their content often centers on topics of regional interest or topics presented from a regional perspective. Authors may be largely from the region rather than an internationally diverse group. Bibliographic information is in English, with the exception of some arts and humanities publications that are by definition in the native language (e.g. literature studies). Cited references must be in the Roman alphabet. All journals selected are publishing on time and are formally peer reviewed. Citation analysis may be applied, but the real importance of the regional journal is measured by the specificity of its content rather than its citation impact.

Subject Areas and Their Characteristics

These first 700 journals selected in 2007 included 161 Social Science titles, 148 Clinical Medicine titles, 108 Agriculture/Biology/Environmental Science titles, 95 Physics/Chemistry/Earth Science titles, 89 Engineering/Computing/Technology titles, 61 Arts/Humanities titles, and 38 Life Sciences titles. The editors’ exploration of each subject area surfaced hidden treasure.

Social Sciences:
The European Union and Asia Pacific regions yielded over 140 social science titles. Subject areas such as business, economics, management, and education have been enriched with regional coverage. Several fine law journals have been selected and will provide balance in an area normally dominated by US journals. Because of the characteristically regional nature of many studies in the social sciences, this area will provide a rich source of coverage that would otherwise not be available to the broader international community.

Clinical Medicine:
Several regional journals dealing with General Medicine, Cardiology, and Orthopedics have been selected. Latin America, Asia Pacific, and European Union are all well represented here. Research in Surgery is a growing area in regional journals. Robotic and other novel surgical technology is no longer limited to the developed nations but now originates in China and India as well and has potential use internationally.

The spread of diseases such as bird flu and SARS eastward and westward from Southeast Asia is a high interest topic regionally and internationally. In some cases host countries develop defensive practices and, if enough time elapses, vaccines. Regional studies on these critical subjects will now be available in Web of Science.

Agriculture/Biology/Environmental Sciences:
Many of the selected regional titles in this area include new or endemic taxa of interest globally. Likewise, regional agriculture or environmental issues are now known to result in global consequences. Many titles are devoted to niche topics such as polar/tundra environment issues, or tropical agronomy. Desertification has heightened the value of literature from central Asian countries. Iranian journals report voluminously on the use of native, desert-tolerant plants and animals that may soon be in demand by desertification-threatened countries.

Physics/Chemistry/Earth Sciences:
Regional journals focused on various aspects of Earth Science are now available in Web of Science. These include titles focused on geology, geography, oceanography, meteorology, climatology, paleontology, remote sensing, and geomorphology. Again, the inherently regional nature of these studies provides a unique view of the subject and brings forward studies heretofore hidden.

Engineering/Computing/Technology:
Engineering is a subject of global interest. Regional Journals in this area typically present subject matter as researched by regional authors for their local audience. Civil and Mechanical Engineering studies are well represented, providing solutions to engineering problems arising from local geological, social, environmental, climatological, or economic factors.

Arts & Humanities:
The already deep coverage of Arts & Humanities in Web of Science is now enhanced by additional regional publications focused on such subjects as History, Linguistics, Archaeology, and Religion. Journals from countries in the European Union, Latin America, Africa, and Asia Pacific regions are included.

Life Sciences:
Life Sciences subject areas lending themselves to regional studies include parasitology, microbiology, and pharmacology. A specific example of valuable regional activity is stem cell research. The illegality of stem cell studies in an increasing number of developed countries has moved the research to various Asian countries, where it is of great interest inside and outside of the region.

Conclusion

The primary mission of the Journal Selection Process is to identify, evaluate and select the top tier international and regional journals for coverage in the Web of Science. These are the journals that have the greatest potential to advance research on a given topic. In the pursuit of this goal Thomson Reuters has partnered with many publishers and societies worldwide in the development of their publications. As an important by-product of the steady application of the Journal Selection Process, Thomson Reuters is actively involved in raising the level of research communication as presented in journals. The objective standards described in the Journal Selection Process will now be focused directly on a new and expansive body of literature. Our hope, therefore, is not only to enrich the editorial content of Web of Science, but also to expand relations with the world’s primary publishers in the achievement of our mutual goal: more effective communication of scientific results to the communities we serve.

James Testa

Author’s note: This essay was compiled by James Testa, Senior Director, Editorial Development & Publisher Relations. Special thanks to Editorial Development staff members Maureen Handel, Mariana Boletta, Rodney Chonka, Lauren Gala, Anne Marie Hinds, Katherine Junkins-Baumgartner, Chang Liu, Kathleen Michael, Luisa Rojo, and Nancy Thornton for their critical reading and comments.

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relation to global ranking schemes, and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, ranking(s) watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, which has been issued by the European Commission, Directorate-General for Education and Culture, is particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is drawn from the Commission’s Lifelong Learning policy and programme stream.

Thus we see a shift, in Europe, towards the implementation of an alternative scheme to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

Message 1: ‘RAE2008 confirms UK’s dominant position in international research’

Like the launch of a spaceship at Cape Canaveral, the UK Research Assessment Exercise (RAE) is being prepared for full release.  The press release was loaded up 14 minutes ago (and is reprinted below).  Careers, and department futures, will be made and broken when the results emerge in 46 minutes.

Note how they frame the results ever so globally; indeed far more so than in previous RAEs.  I’ll be reporting back tomorrow when the results are out, and I’ve had a chance to unpack what “international” means, and also assess just how “international” the make-up of the review panels — both the main and sub-panels — is (or is not), and what types of international registers were taken into account when assessing ‘quality’. In short, can one self-proclaim a “dominant position” in the international research landscape, and if so on what basis? Leaving aside the intra-UK dynamics (and effects) at work here, this RAE is already turning out to be a mechanism to position a research nation within the global research landscape. But for what purpose?

RAE2008 confirms UK’s dominant position in international research

18 December 2008

The results of the 2008 Research Assessment Exercise (RAE2008) announced today confirm the dominant position that universities and colleges in the United Kingdom hold in international research.

RAE2008, which is based on expert review, includes the views of international experts in all the main subject areas. The results demonstrate that 54% of the research conducted by 52,400 staff submitted by 159 universities and colleges is either ‘world-leading’ (17 per cent in the highest grade) or ‘internationally excellent’ (37 per cent in the second highest grade).

Taking the top three grades together (the third grade represents work of internationally recognised quality), 87% of the research activity is of international quality. Of the remaining research submitted, nearly all is of recognised national quality in terms of originality, significance and rigour.

Professor David Eastwood, Chief Executive of HEFCE, said:

“This represents an outstanding achievement, confirming that the UK is among the top rank of research powers in the world. The outcome shows more clearly than ever that there is excellent research to be found across the higher education sector. A total of 150 of the 159 institutions have some work of world-leading quality, while 49 have research of the highest quality in all of their submissions.

“The 2008 RAE has been a detailed, thorough and robust assessment of research quality. Producing quality profiles for each submission – rather than single-point ratings – has enabled the panels to exercise finer degrees of judgement. The assessment process has allowed them to take account of the full breadth of research quality, including inter-disciplinary, applied, basic and strategic research wherever it is located.

“Although we cannot make a direct comparison with the previous exercise carried out in 2001, we can be confident that the results are consistent with other benchmarks indicating that the UK holds second place globally to the US in significant subject fields. One of the most encouraging factors is that the panels reported very favourably on the high-quality work undertaken by early career researchers, which will help the UK to maintain this leading position in the future.”

John Denham, Secretary of State for Innovation, Universities and Skills, said:

“The latest RAE reinforces the UK’s position as a world leader in research and I congratulate our universities and colleges for achieving such outstanding results.

“The fact that over 50 per cent of research is either ‘world-leading’ or ‘internationally excellent’ further confirms that the UK continues to punch above its weight in this crucial field.

“To maintain global excellence during these challenging economic times it will be vital to continue to invest in research; this is why we have committed to fund almost £6bn in research and innovation in England by 2011.”

Key findings:

  • 54% of the research is either ‘world-leading’ (17% in 4*) or ‘internationally excellent’ (37% in 3*)
  • 1,258 of the 2,363 submissions (53% of total) had at least 50% of their activity rated in the two highest grades. These submissions were found in 118 institutions
  • All the submissions from 16 institutions had at least 50% of their activity assessed as 3* or 4*
  • 84% of all submissions were judged to contain at least 5% world-leading quality research
  • 150 of the 159 higher education institutions (HEIs) that took part in RAE2008 demonstrated at least 5% world-leading quality research in one or more of their submissions
  • 49 HEIs have at least some world-leading quality research in all of their submissions.

The ratings scale, which was included in the press release, is pasted in below:

[Image: RAE2008 ratings scale]

Kris Olds

International university rankings, classifications & mappings – a view from the European University Association

Source: European University Association Newsletter, No. 20, 5 December 2008.

Note: also see ‘Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon’

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away.  Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to selected parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play in importance terms. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support or relatively rapid allocation of the hatchet at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon the relative rankings of each university’s position in the RAE, which is the aggregate effect of the position of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings.  Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming they eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also faculty, albeit selectively. Again, there is no higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined.  As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a familiar (to US readers) format, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News and World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis are whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national (at the intra-European scale) peaks (and presumably valleys) of quality output to be mapped not only for the humanities as a whole but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a european model?’ which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

Times Higher Education – QS World University Rankings (2008): a niche industry in formation?

The new Times Higher Education – QS World University Rankings (2008) were just released, and the copyright regulations deepen and extend, push and pull, enable and constrain.  Global rankings: a niche industry in formation?

Kris Olds

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard and Poor’s credit rating report regarding UK universities (‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s  ‘University rankings: deliberations and future directions‘), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology, and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’ (photo below), including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds

‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities

This week, one of the two major credit rating agencies in the world, Standard & Poor’s (Moody’s is the other), issued their annual ‘Report Card’ on UK universities. This year’s version is titled UK Universities Enjoy Higher Revenues but Still Face Spending Pressures and it has received a fair bit of attention in media outlets (e.g., the Financial Times and The Guardian). Our thanks to Standard and Poor’s for sending us a copy of the report.

Five UK universities were in the spotlight after having their creditworthiness rated by Standard & Poor’s (S&P’s). In total, S&P’s assesses 20 universities in the UK (5 are made public, the rest are confidential), with 90% of this survey considered by the rating agency to be of high investment grade quality (of A- or above).

Universities in the UK, it would appear from S&P’s Report Card, have had a relatively good year from ‘a credit perspective’. This pronouncement is surely something to celebrate in a year when the word ‘credit crunch’ has become the new metaphor for economic meltdown, and when higher education institutions are likely to be worried about the effects of the sub-prime mortgage lending crisis on loans to students and institutions more generally.

But to the average lay person (or even the average university professor), with a generally low level of financial literacy, what does this all mean? What does it mean for global ratings agencies to pass judgments on UK universities, on the policies that drive the sector more generally, or, finally, on individual institutional governance decisions?

Three years ago, when one of us (Susan) was delivering an Inaugural Professorial Address at Bristol, S&P’s 2005 report on Bristol (AA/Stable/–) was flashed up, much to the amusement of the audience though to the bemusement of the Chair, a senior university leader. The mild embarrassment of the Chair was largely a consequence of the fact that he was unaware of this judgment on Bristol by a credit rating agency headquartered in New York.

Now the reason for showing S&P’s judgment on the University of Bristol was neither to amuse the audience nor to embarrass the Chair. The point at the time was to sketch out the changing landscape of globalizing education systems within the wider global political economy, to introduce some of the newer (and more private) players who increasingly wield policymaking/shaping power on the sector, to reflect on how these agencies work, and to delineate some of the emerging effects of such developments on the sector.

Our view is that current analyses of globalizing higher education have neglected the role of credit rating agencies in the governance of the higher education sector—as specialized forms of intelligence gathering, shaping and judgment determination on universities. Yet, credit rating agencies are, in many ways, at the heart of contemporary global governance. Witness, for example, the huge debates going on now about establishing a European register for ratings agencies.

The release, then, this week of the S&P’s UK Universities 2008 Report Card, is an opportunity for GlobalHigherEd to sketch out to interested readers a basic understanding of global rating agencies and their relationship to the global governance of higher education.

Rating agencies – origins

Timothy Sinclair, a University of Warwick academic, has been writing for more than a decade on rating agencies and their roles in what he calls the New Global Finance (NGF) (Sinclair, 2000). His various articles and books (see, for example, Sinclair 1994; 2000; 2003; 2005)—some of which are listed below—are worth reading for those of you who want to pursue the topic in greater depth.

Sinclair outlines the early development and subsequent growing importance of credit rating agencies—the ‘masters of capital’ and ‘second superpowers’—arguing that there have been a number of distinct phases in their development.

The first phase dates back to the 1850s, when compendiums of information were produced for American financial markets about large industrial infrastructure developments, such as railroads and canals. However, it was not until the 1907 financial crisis that these early compendiums of information were then used to make judgements about the creditworthiness of debtors (Sinclair, 2003: 148).

‘Rating’ then entered a period of rapid growth from the mid-1930s onwards, as a result of state governments in the US incorporating rating standards into their prudential rules for investment by pension funds.

A third phase began in the 1980s, when new financial innovations (particularly low-rated or junk bonds) were developed, and cheaper offshore non-national money markets were created (that is, places where funds are raised by selling debt obligations and equity outside of the current constraints of government regulation).

However this process, of what Sinclair (1994: 136) calls the ‘disintermediation’ of financing (meaning state regulatory bodies are side-stepped), creates information problems for those wishing to lend money and those wishing to borrow it.

The current phase is characterized by, on the one hand, greater internationalization of finance and, on the other, the increased significance of capital markets, which challenge the role of banks as intermediaries.

Credit rating agencies have, as a result, become more important as suppliers of the information with which to make credit-worthiness judgments.

New York-based rating agencies have grown rapidly since then, responding to innovations in financial instruments, on the one hand, and the need for information, on the other. Demand for information has also generated competition within the industry, with some firms developing niche specializations – as we see, for instance, with Standard & Poor’s (itself a subsidiary of the publisher McGraw-Hill) and its coverage of the higher education sector.

Credit rating is big, big business. As Sinclair (2005) notes, the two major credit rating agencies, Moody’s and Standard & Poor’s, pass judgments on around $30 trillion worth of securities each year. Ratings also affect the cost of borrowing: the higher the rating, the lower the perceived risk of default on repayment to the lender, and therefore the lower the cost to the borrower.

Universities with different credit ratings will, therefore, be differently placed to borrow – so that the adage of ‘the more you have the more you get’ becomes a major theme.

The rating process

If we look at the detail of the ‘issuer credit rating’ and ‘comments’ in the Report Card of, for instance, the University of Bristol, or King’s College London, we can see that detail is gathered on the financial rating of the issuer; on the industry, competitors, and economy; on legal advice related to the specific issue; on management, policy, business outlook, accounting practices and so on; and on the competitive position, quality of management, long term industry prospects, and wider economic environment. As Sinclair (2003: 150) notes:

The rating agencies are most interested in data on cash flow relative to debt service obligations. They want to know how liquid the company is, and where there will be timely problems likely to hinder repayment. Other information may include five-year financial projections, including income statements and balance sheets, analysis of capital spending plans, financing alternatives, and contingency plans. This information which may not be publicly known is supplemented by agency research into the value of current outstanding obligations, stock valuations and other publicly available data that allows for an inference…

The rating that follows – an opinion on creditworthiness – is generated by an analytical team; a report is prepared with the rating and rationale; this is put to a rating committee made up of senior officials; and a final determination is made in private. The decision is subject to appeal by the issuer. Issuer credit ratings can be either long or short term. S&P uses the following nomenclature for long-term issue credit ratings (see Bankers Almanac, 2008: 1-3):

  • AAA – highest rating; extremely strong capacity to meet financial commitments
  • AA – very strong capacity to meet financial commitments
  • A – strong capacity to meet financial commitments, but susceptible to the adverse effects of changes in circumstances and economic conditions
  • BBB – adequate capacity to meet financial commitments
  • BB – less vulnerable in the near term than other lower-rated obligors, but faces major ongoing uncertainties
  • B – more vulnerable than BB; adverse business, financial or economic conditions will likely impair the obligor’s capacity to meet its financial commitments

Rating higher education institutions

In light of the above discussion, we can now look more closely at the kinds of judgments passed on those universities included in a typical Report Card on the sector by Standard & Poor’s (see 2008: 7).

The 2008 Report Card itself is short: a nine-page document which offers a ‘credit perspective’ on the sector more generally, and on 5 universities in particular. We are told that “the UK higher education sector has made positive strides over the past few years, but faces increasing risks in the medium-to-long term” (p. 2).

The Report goes on to note a trebling of tuition fees in the UK, the growth of the overseas student market and its associated income, and an increase in research income for research-intensive universities – so that of the 5 universities rated, 1 has been upgraded, another has had its outlook revised to ‘positive’, and no ratings were adjusted for the other three.

The Report also notes (p. 2) that the universities publicly rated by S&P’s are among the leading universities in the UK. To support this claim they refer to another ranking mechanism that is now providing information in the global marketplace – The Times Higher QS World Universities Rankings 2007, which is, as we have noted in a recent entry (‘Euro angsts‘), receiving considerable critical attention in Europe.

However, the Report Card also notes pressures within the system: higher wage demands linked to tuition increases, the search for new researchers to be counted as part of the UK’s Research Assessment Exercise (RAE), global competition for international students, and the heightened expectations of students for better infrastructure as a result of higher fees.

Longer term risks include the fact that by 2020 there will be 16% fewer 18-year-olds coming through the system, according to forecasts by Universities UK – with the biggest impact falling on the newer universities (in the UK these so-called ‘newer universities’ are former polytechnics that were granted university status in 1992).

Of the 20 UK universities rated in this S&P’s Report, 4 universities are rated AAA; 8 are rated AA; 6 are rated A, and 2 are rated BBB. The University of Bristol, as we can see from the analysts’ rating and comments which we have reproduced below, is given a relatively favorable rating. We have also quoted this rating at length to give you a sense of the kind of commentary made and how this relates to the judgment passed.


Credit rating agencies as instruments of the global governance of higher education

Credit rating agencies are particularly powerful because both markets and governments see them as authoritative sources of judgment, with the result that they are major actors in controlling access to capital markets. And despite the evident importance of credit rating agencies to the governance of universities in the UK and elsewhere, there is a remarkable lack of attention to this phenomenon. We think there are important questions that need to be researched and the results discussed more widely. For example:

  • How widely spread is the practice?
  • Why are some universities rated whilst others are not?
  • Why are some universities’ ratings considered confidential whilst others are not (keeping in mind that they are all, in the above UK case, public taxpayer supported universities)?
  • Have any universities contested their credit rating, and if so, through what process, and with what outcome?
  • How do universities’ management systems respond to these credit ratings, and in what ways might they influence ongoing policy decisions within the university and within the sector?
  • How robust are particular kinds of reputational or status ‘information’, such as World University Rankings, especially if we are looking at creditworthiness?

Our reports on these global rankings show that there are major problems with such measures. As we have profiled, and as have University Ranking Watch and the Beerkens’ Blog, there are clearly unresolved debates and major problems with global ranking schemes.

Clearly market liberalism, of the kind that has characterized this current period of globalization, requires new kinds of intermediaries to provide information for both buyer and seller. And it cannot hurt to have ‘outside’ assessments of the fiscal health of institutions (in this case universities) that are complex, often opaque, and taxpayer supported. However, to experts like Timothy Sinclair (2003), credit rating agencies privatize policymaking, and they can narrow the sphere of government intervention.

For the EU Internal Market Commissioner, Charlie McCreevy, credit ratings agencies like Moody’s and S&P’s contributed to the current financial market turmoil because they underestimated the risks related to structured credit products. As the Commissioner commented in EurActiv in June: “No supervisor appears to have got as much as a sniff of the rot at the heart of the structured finance rating process before it all blew up.”

In other words, credit rating agencies lack political accountability and enjoy an ‘accountability gap’. And while efforts are now under way by regulators to close that gap by developing new regulatory frameworks and rules, analysts worry that these private actors will now find new ways around the rules, and in turn facilitate the creation of a riskier financial architecture (as happened with global mortgage markets).

As universities become more financialized, as well as ranked, indexed and barometered in the ways we have been mapping on GlobalHigherEd, such ‘information’ on the sector will also likely be deployed to pass judgment and to generate ratings and rankings of ‘creditworthiness’ for universities. The net effect may well be to exaggerate the differences between institutions, to generate greater levels of uneven development within and across the sector, and to increase rather than decrease the opacity of the sector, thereby weakening its accountability.

In sum, there is little doubt that credit rating agencies, in passing judgments, play a key and increasingly important role in the global governance of higher education. It is also clear from these developments that we need to pay much closer attention to what might be thought of as mundane entities – credit rating agencies – and their role in that governance. We are also hopeful that credit ratings agencies will outline their own views on this important dimension of the small ‘g’ governance of higher education institutions.

Selected References

Bankers Almanac (2008) Standard and Poor’s Definitions, last accessed 5 August 2008.

King, M. and Sinclair, T. (2003) Private actors and public policy: a requiem for the new Basel Capital Accord, International Political Science Review, 24 (3), pp. 345-62.

Sinclair, T. (1994) Passing judgement: credit rating processes as regulatory mechanisms of governance in the emerging world order, Review of International Political Economy, 1 (1), pp. 133-159.

Sinclair, T. (2000) Reinventing authority: embedded knowledge networks and the new global finance, Environment and Planning C: Government and Policy, August 18 (4), pp. 487-502.

Sinclair, T. (2003) Global monitor: bond rating agencies, New Political Economy, 8 (1), pp. 147-161.

Sinclair, T. (2005) The New Masters of Capital: American Bond Rating Agencies and the Politics of Creditworthiness, Ithaca, NY: Cornell University Press.

Standard & Poor’s (2008) Report Card: UK Universities Enjoy Higher Revenues But Still Face Spending Pressures, London: Standard & Poor’s.

Susan Robertson and Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the lists consists of 189 universities…

Quite the authority being vested in ranking schemes that are still being hotly debated!

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project unrelated to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high-quality universities despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, given the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment-cum-governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence, until now, of a serious European critique of, or at least interrogation of, rankings and the rankers. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking is a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until more recently, that ranking schemes are deeply geopolitical and geoeconomic vehicles that enable the powerful to maintain their standing and draw yet more resources inward. Angst about competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities or private institutions like Stanford, simply accepted the schemes and the results summarized in a table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende, for those results merely reinforced an acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one really expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As in Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish two or three universities along the lines of the King Abdullah University of Science and Technology (KAUST), which Saudi Arabia recently founded with a $10 billion endowment, angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be described as a fait accompli phase, national and European political leaders came progressively to view the low standing of European universities in the two key rankings schemes – Shanghai and Times Higher – as a problem. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and has slowly started to generate impacts, while being continually retuned. If the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – are so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets and the drive of the relaunched Lisbon agenda, Europe’s rankers of the ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well-placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to sustained action is pretty quick in some sectors, and not in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, and so on. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics of which they are part and parcel.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university ranking schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged, for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, itself partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will – at least not in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French. In some ways it would have been better to have the report released simultaneously in both French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU or one of the many emerging initiatives above would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.