The Business Side of World University Rankings

Over the last two years I’ve made the point numerous times here that world university rankings have become normalized on an annual cycle, and that they function as data acquisition mechanisms that drill deep into universities while encouraging (seducing?) universities to provide the data for free. In reality, the data is provided at a cost: the staff time allocated to produce it has to be paid for, and allocating staff time this way generates opportunity costs.

See below for the latest indicator of the business side of world university rankings. Interestingly, today’s press release from Thomson Reuters (reprinted in full) makes no mention of world university rankings, nor of Times Higher Education, the media outlet owned by TSL Education, which was itself acquired by Charterhouse Capital Partners in 2007. Recall that Times Higher Education began working with Thomson Reuters in 2010.

The Institutional Profiles™ being marketed here derive data from “a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey,” all of which (apart from the citation metrics) come to the firm via the ‘Times Higher Education World University Rankings (powered by Thomson Reuters).’

Of course there is absolutely nothing wrong with providing services (for a charge) to enhance the management of universities, but would most universities (and their funding agencies) agree, from the start, to the establishment of a relationship in which all data is provided for free to a centralized private authority headquartered in the US and UK, which then both manages and monetizes that data? I’m not so sure.

This is arguably another case of universities thinking only of themselves and not looking at the bigger picture. There is a nearly complete absence of collective action on this kind of developmental dynamic, one worthy of greater attention, debate, and oversight, if not formal governance.

Kris Olds

<><><><>

12 Apr 2012

Thomson Reuters Improves Measurement of Universities’ Performance with New Data on Faculty Size, Reputation, Funding and Citation Measures

Comprehensive data now available in Institutional Profiles for universities such as Princeton, McGill, Nanyang Technological, University of Hong Kong and others

Philadelphia, PA, April 12, 2012 – The Intellectual Property & Science business of Thomson Reuters today announced the availability of 138 percent more performance indicators and nearly 20 percent more university data within Institutional Profiles™, the company’s online resource covering more than 500 of the world’s leading academic research institutions. This new data enables administrators and policy makers to reliably measure their institution’s performance and make international comparisons.

Using a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey, Institutional Profiles provides details on faculty size, student body, reputation, funding, and publication and citation data.

Two new performance indicators were also added to Institutional Profiles: International Diversity and Teaching Performance. These measure the global composition of staff and students, international co-authorship, and education input/output metrics, such as the ratio of students enrolled to degrees awarded in the same area. The indicators now cover 100 different areas, ensuring faculty and administrators have the most complete institutional data possible.

All of the data included in the tool has been vetted and normalized for accuracy. The latest update also includes several enhancements to existing performance indicators, such as Normalized Citation Impact. This allows for equally weighted comparisons between subject groups that have varying levels of citations.

“Institutional Profiles continues to provide answers to the questions that keep administrators up at night: ‘Beyond citation impact or mission statement, which institutions are the best collaboration partners for us to pursue? How can I understand the indicators and data that inform global rankings?’,” said Keith MacGregor, executive vice president at Thomson Reuters. “With this update, the tool provides the resources to reliably measure and compare academic and research performance in new and more complete ways, empowering strategic decision-making based on each institution’s unique needs.”

Institutional Profiles, a module within the InCites™ platform, is part of the research analytics suite of solutions provided by Thomson Reuters that supports strategic decision making and the evaluation and management of research. In addition to InCites, this suite of solutions includes consulting services, custom studies and reports, and Research in View™.

For more information, go to:
http://researchanalytics.thomsonreuters.com/institutionalprofiles/

About Thomson Reuters
Thomson Reuters is the world’s leading source of intelligent information for businesses and professionals. We combine industry expertise with innovative technology to deliver critical information to leading decision makers in the financial and risk, legal, tax and accounting, intellectual property and science and media markets, powered by the world’s most trusted news organization. With headquarters in New York and major operations in London and Eagan, Minnesota, Thomson Reuters employs approximately 60,000 people and operates in over 100 countries. For more information, go to http://www.thomsonreuters.com.

Contacts

Alyssa Velekei
Public Relations Specialist
Tel: +1 215 823 1894

Field-specific cultures of international research collaboration

Editors’ note: how can we better understand and map out the phenomenon of international research collaboration, especially in a context where bibliometrics does a patchy job with respect to registering the activities and output of some fields/disciplines? This is one of the questions Dr. Heike Jöns (Department of Geography, Loughborough University, UK) grapples with in this informative guest entry in GlobalHigherEd. The entry draws from Dr. Jöns’ considerable experience studying forms of mobility associated with the globalization of higher education and research.

Dr. Jöns received her PhD at the University of Heidelberg (Germany) and spent two years as a Feodor Lynen Postdoctoral Research Fellow of the Alexander von Humboldt Foundation at the University of Nottingham (UK). She is interested in the geographies of science and higher education, with particular emphasis on transnational academic mobility.

Further responses to ‘Understanding international research collaboration in the social sciences and humanities’, and Heike Jöns’ response below, are welcome at any time.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~~~~~~~~

The evaluation of research performance at European universities increasingly draws upon quantitative measurements of publication output and citation counts based on databases such as ISI Web of Knowledge, Scopus and Google Scholar (UNESCO 2010). Bibliometric indicators also inform annually published world university rankings such as the Shanghai and Times Higher Education rankings that have become powerful agents in contemporary audit culture despite their methodological limitations. Both league tables introduced field-specific rankings in 2007, differentiating between the natural, life, engineering and social sciences (both rankings), medicine (Shanghai) and the arts and humanities (Times Higher).

But to what extent do bibliometric indicators represent research output and collaborative cultures in different academic fields? This blog entry responds to this important question raised by Kris Olds (2010) in his GlobalHigherEd entry titled ‘Understanding international research collaboration in the social sciences and humanities‘ by discussing recent findings on field-specific research cultures from the perspective of transnational academic mobility and collaboration.

The inadequacy of bibliometric data for capturing research output in the arts and humanities has, for example, been demonstrated by Anssi Paasi’s (2005) study of international publishing spaces. Decisions about which journals enter the respective databases, the bias towards English-language journals, and the neglect of monographs and anthologies that prevail in fields dominated by individual authorship are just a few of the reasons why citation indexes are not able to capture the complexity, place- and language-specificity of scholarship in the arts and humanities. Mapping the international publishing spaces in the sciences, the social sciences and the arts and humanities using ISI Web of Science data in fact suggests that the arts and humanities are less international and even more centred on the United States and Europe than the sciences (Paasi 2005: 781). Based on the analysis of survey data provided by 1,893 visiting researchers in Germany in the period 1954 to 2000, this GlobalHigherEd entry aims to challenge this partial view by revealing the hidden dimensions of international collaboration in the arts and humanities and elaborating on why research output and collaborative cultures vary not only between disciplines but also between different types of research work (for details, see Jöns 2007; 2009).

The visiting researchers under study were funded by the Humboldt Research Fellowship Programme run by the Alexander von Humboldt Foundation (Bonn, Germany). They came to Germany in order to pursue a specific research project at one or more host institutions for about a year. Striking differences in collaborative cultures by academic field and type of research work are revealed by the following three questions:

1. Could the visiting researchers have carried out their research project at home or in any other country?

2. To what extent did the visiting researchers write joint publications with colleagues in Germany as a result of their research stay?

3. In which ways did the collaboration between visiting researchers and German colleagues continue after the research stay?

On question 1.

Research projects in the arts and humanities, and particularly those that involved empirical work, were most often tied to the research context in Germany. They were followed by experimental and theoretical projects in engineering and in the natural sciences, which were much more frequently possible in other countries as well (Figure 1).

Figure 1 — Possibility of doing the Humboldt research project in another country than Germany, 1981–2000 (Source: Jöns 2007: 106)

These differences in place-specificity are closely linked to different possibilities for mobilizing visiting researchers on a global scale. For example, the establishment of new research infrastructure in the physical, biological and technical sciences can easily raise scientific interest in a host country, whereas the mobilisation of new visiting researchers in the arts and humanities remains difficult as language skills and cultural knowledge are often necessary for conducting research projects in these fields. This is one reason why the natural and technical sciences appear to be more international than the arts and humanities.

On question 2.

Joint publications with colleagues in Germany were most frequently written in physics, chemistry, medicine, engineering and the biological sciences, which are all dominated by multi-authorship. Individual authorship was more frequent in mathematics and the earth sciences and most popular – but with considerable variations between different subfields – in the arts and humanities. The spectrum ranged from every second economist and social scientist who wrote joint publications with colleagues in Germany, through roughly one third in language and cultural studies and history, and every fifth in law, to only every sixth in philosophy. Researchers in the arts and humanities had, much more often than their colleagues in the sciences, stayed in Germany for study and research prior to the Humboldt research stay (over 95% in the empirical arts and humanities compared to less than 40% in the theoretical technical sciences), as their area of specialisation often required learning the language and studying original sources or local research subjects. They therefore engaged much more closely with German language and culture than natural and technical scientists, but due to the great individuality of their work they not only produced considerably fewer joint publications than their apparently more international colleagues, but their share of joint publications with German colleagues before and after the research stay was also fairly similar (Figure 2).

Figure 2 — Joint publications of Humboldt research fellows and colleagues in Germany, 1981–2000 (Source: Jöns 2007: 107)

For these reasons, internationally co-authored publications are not suitable for evaluating the international attractiveness and orientation of different academic fields, particularly because the complexity of different types of research practices in one and the same discipline makes it difficult to establish typical collaborative cultures against which research output and collaborative linkages could be judged.

On question 3.

This is confirmed when examining continued collaboration with colleagues in Germany after the research stay. The frequency of continued collaboration did not vary significantly between disciplines but the nature of these collaborations differed substantially. Whereas regular collaboration in the natural and technical sciences almost certainly implied the publication of multi-authored articles in internationally peer-reviewed journals, continued interaction in the arts and humanities, and to a lesser extent in the social sciences, often involved activities beyond the co-authorship of journal articles. Table 1 captures some of these less well-documented dimensions of international research collaboration, including contributions to German-language scientific journals and book series as well as refereeing for German students, researchers and the funding agencies themselves.



Table 1 — Activities of visiting researchers in Germany after their research stay (in % of Humboldt research fellows 1954-2000; Source: Jöns 2009: 327)

The differences in both place-specificity and potential for co-authorship in different research practices can be explained by their particular spatial ontology. First, different degrees of materiality and immateriality imply varying spatial relations that result in typical patterns of place-specificity and ubiquity of research practices as well as of individual and collective authorship. Due to the corporeality of researchers, all research practices are to some extent physically embedded and localised. However, researchers working with physically embedded material research objects that might not be moved easily, such as archival material, field sites, certain technical equipment, groups of people and events, may be dependent on accessing a particular site or local research context at least once. Those scientists and scholars who primarily deal with theories and thoughts are in turn as mobile as the embodiment of these immaterialities (e.g., collaborators, computers, books) allows them to be. Theoretical work in the natural sciences, including, for example, many types of mathematical research, thus appears to be the most ‘ubiquitous’ subject: its high share of immaterial thought processes compared to relatively few material resources involved in the process of knowledge production (sometimes only pen and paper) would often make it possible, from the perspective of the researchers, to work in a number of different places (Figure 1, above).

Second, the constitutive elements of research vary according to their degree of standardisation. Standardisation results from the work and agreement previously invested in the classification and transformation of research objects. A high degree of standardisation would mean that the research practice relies on many uniform terms, criteria, formulas and data, components and materials, methods, processes and practices that are generally accepted in the particular field of academic work. Field sites, for example, might initially show no signs of standardisation, whereas laboratory equipment such as test tubes may have been manufactured on the basis of previous – and then standardised – considerations and practices. The field site may be unique; highly standardised laboratory equipment, by contrast, may be found at several sites to which the networks of science have been extended, thereby offering greater flexibility in the choice of the research location. In regard to research practices with a higher degree of immateriality, theoretical practices in the natural and technical sciences show a higher degree of standardisation (e.g., in terms of language) when compared to theoretical and argumentative-interpretative work in the arts and humanities, and thus are less place-specific and offer more potential for co-authorship (Figures 1 and 2).

The resulting two-dimensional matrix on the spatial relations of different research practices accommodates the empirically observed differences in both the place-specificity of the visiting researchers’ projects and their resulting joint publications with colleagues in Germany (Figure 3):

Figure 3 — A two-dimensional matrix on varying spatial relations of different research practices (Source: Jöns 2007: 109)

Empirical work, showing a high degree of materiality and a low degree of standardisation, is most often dependent on one particular site, followed by argumentative-interpretative work, which is characterised by a similarly low degree of standardisation but a higher degree of immateriality. Experimental (laboratory) work, showing a high degree of both materiality and standardisation, can often be conducted at several (laboratory) sites, while theoretical work in the natural sciences, involving a high degree of both immateriality and standardisation, is least often tied to one particular site. The fewest joint publications were written in argumentative-interpretative work, where a large internal (immaterial) research context and a great variety of arguments from different authors in possibly different languages complicate collaboration on a specific topic. The highest frequency of co- and multi-authorship was found in experimental (laboratory) work, which involves an external (material) and highly standardised research context. In short, the more immaterial and standardised the research practice, the lower the place-specificity of one’s work and the easier it would be to work at home or elsewhere; and the more material and standardised the research practice, the more likely is collaboration through co- and multi-authorship.
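
For readers who find it easier to scan the typology schematically, here is a minimal illustrative sketch (not part of the original study) that encodes the two dimensions of Figure 3 and the qualitative tendencies just described as a small Python lookup table; the labels follow the text above, while the data structure itself is a hypothetical aid.

```python
# Illustrative only: the two-dimensional matrix of Figure 3 as a lookup table.
# Keys and labels follow the text above; the structure is a hypothetical aid.
RESEARCH_PRACTICES = {
    # practice: (dominant spatial relation, degree of standardisation)
    "empirical work":                      ("material",   "low"),
    "argumentative-interpretative work":   ("immaterial", "low"),
    "experimental (laboratory) work":      ("material",   "high"),
    "theoretical work (natural sciences)": ("immaterial", "high"),
}

# Qualitative tendencies reported above:
# - place-specificity is highest for empirical work and lowest for
#   theoretical work in the natural sciences;
# - co-/multi-authorship is most frequent in experimental (laboratory) work
#   and least frequent in argumentative-interpretative work.
```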

Based on this work, it can be concluded – in response to two of Kris Olds’ (2010) key questions – that international research collaboration on a global scale can be mapped – if only roughly – for research practices characterised by co- and multi-authorship in internationally peer-reviewed English-language journals, as the required data is provided by citation databases (e.g., Wagner and Leydesdorff 2005; Adams et al. 2007; Leydesdorff and Persson 2010; Matthiessen et al. 2010; UNESCO 2010). When interpreting such mapping exercises, however, one needs to keep in mind that the data included in ISI Web of Knowledge, Scopus and Google Scholar themselves vary considerably.
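
To show what such a mapping exercise looks like in practice, here is a minimal, hypothetical sketch: it assumes each publication record carries the countries of its authors’ affiliations (as records drawn from citation databases can typically be made to do) and counts international co-authorship links between country pairs. The record format and field names are invented for illustration.

```python
# A minimal sketch of mapping international collaboration via co-authorship.
# Assumes each record lists the countries of the authors' affiliations;
# the record format and field names here are invented for illustration.
from collections import Counter
from itertools import combinations

publications = [
    {"title": "Paper A", "countries": ["DE", "UK", "US"]},
    {"title": "Paper B", "countries": ["DE", "DE", "NL"]},
    {"title": "Paper C", "countries": ["FR"]},  # single-country paper: no international link
]

def co_authorship_links(records):
    """Count co-authorship links between pairs of countries across records."""
    links = Counter()
    for rec in records:
        for pair in combinations(sorted(set(rec["countries"])), 2):
            links[pair] += 1
    return links

print(co_authorship_links(publications))
# Counter({('DE', 'UK'): 1, ('DE', 'US'): 1, ('UK', 'US'): 1, ('DE', 'NL'): 1})
```

Note that a count of this kind only ever registers collaboration that ends in joint publication; the refereeing, editing and other activities listed in Table 1 remain invisible to it, which is precisely the point made above.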

Other research practices require different research methods such as surveys and interviews and thus can only be mapped from specific perspectives such as individual institutions or groups of researchers (for the application of bibliometrics to individual journals in the arts and humanities, see Leydesdorff and Salah 2010). It might be possible to create baseline studies that help to judge the type and volume of research output and international collaboration against typical patterns in a field of research, but the presented case study has shown that the significance of specific research locations, of individual and collective authorship, and of different types of transnational collaboration varies not only between academic fields but also between research practices that crisscross conventional disciplinary boundaries.

In the everyday reality of departmental research evaluation, this means that in a field such as geography, a possible benchmark of three research papers per year may be easily met in most fields of physical geography and some fields of human geography (e.g. economic and social), whereas the nature of research practices in historical and cultural geography, for example, might make it difficult to maintain such a high research output over a number of subsequent years. Applying standardised criteria of research evaluation to this great diversity of publication and collaboration cultures inevitably bears the danger of leading to a standardisation of academic knowledge production.

Heike Jöns

References

Adams J, Gurney K and Marshall S 2007 Patterns of international collaboration for the UK and leading partners Evidence Ltd., Leeds

Jöns H 2007 Transnational mobility and the spaces of knowledge production: a comparison of global patterns, motivations and collaborations in different academic fields Social Geography 2 97-114  Accessed 23 September 2010

Jöns H 2009 ‘Brain circulation’ and transnational knowledge networks: studying long-term effects of academic mobility to Germany, 1954–2000 Global Networks 9 315-38

Leydesdorff L and Persson O 2010 Mapping the geography of science: distribution patterns and networks of relations among cities and institutes Journal of the American Society for Information Science and Technology 61 1622-1634

Leydesdorff L and Salah A A A 2010 Maps on the basis of the Arts & Humanities Citation Index: the journals Leonardo and Art Journal, and “Digital Humanities” as a topic Journal of the American Society for Information Science and Technology 61 787-801

Matthiessen C W, Schwarz A W and Find S 2010 World cities of scientific knowledge: systems, networks and potential dynamics. An analysis based on bibliometric indicators Urban Studies 47 1879-97

Olds K 2010 Understanding international research collaboration in the social sciences and humanities GlobalHigherEd 20 July 2010  Accessed 23 September 2010

Paasi A 2005 Globalisation, academic capitalism, and the uneven geographies of international journal publishing spaces Environment and Planning A 37 769-89

UNESCO 2010 World Social Science Report: Knowledge Divides UNESCO, Paris

Wagner C S and Leydesdorff L 2005 Mapping the network of global science: comparing international co-authorships from 1990 to 2000 International Journal of Technology and Globalization 1 185–208


THE-QS World University Rankings 2009: Year 6 of market making

Well, an email arrived today and I just could not help myself… I clicked on the THE-QS World University Rankings 2009 links that were provided to see who received what ranking. In addition, I did a quick Google scan of news outlets and weblogs to see what spins were already underway.

The THE-QS ranking seems to have become the locomotive for the Times Higher Education, a higher education newsletter published weekly in the UK. In contrast to the daily Chronicle of Higher Education and the daily Inside Higher Ed (both based in the US), the Times Higher Education seems challenged to provide quality content of some depth even on its relatively relaxed weekly schedule. I spent four years in the UK in the mid-1990s, and I can’t help but note the decline in the quality of the coverage of UK higher education news over the last decade and more.

It seems as if the Times Higher has decided to allocate most of its efforts to promoting the creation and propagation of this global ranking scheme rather than providing detailed, analytical, and critical coverage of issues in the UK, let alone in the European Higher Education Area. Six steady years of rankings generate attention and advertising revenue, and enhance some aspects of power and perceived esteem. But, in the end, where is the Times Higher in analyzing the forces shaping the systems in which all of these universities are embedded, or the complex forces shaping university development strategies? Rather, we primarily seem to get increasingly thin articles based on relatively limited original research, heaps of advertising (especially jobs), and now regular build-ups to the annual rankings frenzy. In addition, their partnership with QS Quacquarelli Symonds is leading to new regional rankings: a clear form of market-making at a new, unexploited geographic scale. Of course there are some useful insights generated by rankings, but the rankings attention is arguably making the Times Higher lazier and, dare I say, irresponsible, given the increasing significance of higher education to modern societies and economies.

In addition, I continue to be intrigued by how UK-based analysts and institutions seem infatuated with the term “international”, as if it necessarily means better quality than “national”. See, for example, the “international” elements of the current ranking in the figure below:

[Figure: THE-QS World University Rankings 2009 scores, including the “international” indicators]

Leaving aside my problems with the limited scale of the survey numbers (9,386 academics to represent the “world’s” academics? 3,281 firm representatives to represent the “world’s” employers?) and the approach to weighting, why does the proportion of “international” faculty and students necessarily enhance the quality of university life?
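
To illustrate why the weighting matters, here is a rough, hypothetical sketch of how a composite ranking score of this kind is assembled. The weights are the widely reported THE-QS ones (an assumption on my part, not taken from the ranking’s own documentation), and the indicator scores are invented; indicators are assumed to be pre-normalised to a 0-100 scale.

```python
# Hypothetical sketch of a weighted composite ranking score.
# Weights follow the widely reported THE-QS scheme (assumed, not official);
# indicator scores are invented and assumed pre-normalised to 0-100.
WEIGHTS = {
    "academic_peer_review":   0.40,
    "employer_review":        0.10,
    "faculty_student_ratio":  0.20,
    "citations_per_faculty":  0.20,
    "international_faculty":  0.05,
    "international_students": 0.05,
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

example_university = {
    "academic_peer_review":   90.0,
    "employer_review":        80.0,
    "faculty_student_ratio":  70.0,
    "citations_per_faculty":  60.0,
    "international_faculty":  95.0,  # a high share of international staff...
    "international_students": 95.0,  # ...and students lifts the score directly
}

print(round(composite_score(example_university), 1))  # 79.5
```

On these assumed weights, a tenth of the total score comes straight from the share of international staff and students, with no assessment of who those staff and students are or what they contribute, which is exactly the problem raised above.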

Some universities, especially in Australasia and the UK, seek high proportions of international students to compensate for declining levels of government support and weak levels of extramural research funding (which provides streams of income via overhead charges). Thus a higher number of international students may be, in some cases, inversely related to the quality of the university or the health of the public higher education system in which the university is embedded.

In addition, in some contexts, universities are legally required to limit “non-resident” student intake given the nature of the higher education system in place. But in the metrics used here, universities with the incentives and the freedom to let in large numbers of foreign students, for reasons other than the quality of said students, are rewarded with a higher rank.

The discourse of “international” is elevated here, much as it was in the last Research Assessment Exercise (RAE) in the UK, with “international” as a codeword for higher quality. But international is just that – international – and it means nothing more than that unless we assess how good they (international students and faculty) are, what they contribute to the educational experience, and what lasting impacts they generate.

In any case, the THE-QS rankings are out. The relative position of universities in the rankings will be debated, and used to provide legitimacy for new or previously unrecognized claims. But it is really the methodology that needs to be unpacked, as well as the nature and logics of the rankers, rather than just the institutions that are being ranked.

Kris Olds

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has a particular expertise in education economics and applied econometrics.

Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, together with the demand for greater accountability and transparency, have all contributed to raising the need to compare university quality across countries.

Recognition of this need has also been greatly spurred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in terms of cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU ranking is always associated with a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of the fragmented landscape of French higher education institutions (see Les Echos, 7 August 2008).

Whether the rankers intended it or not, university rankings have taken on a life of their own: they are used by national policy makers to stimulate debates about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES or SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the sensitivity of the university ranks to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we simultaneously activate different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (set of selected indicators, weighting scheme, and aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than the single ranks, allows one to deal with the criticism, often made of league tables and ranking systems, that ranks are presented as if they were calculated under conditions of certainty while this is rarely the case.
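
To give a sense of what such a multi-modeling exercise involves, here is a stylised sketch with invented data (it is not the JRC/CRELL implementation): the indicator set, weighting scheme and aggregation method are varied, universities are re-ranked under every scenario, and the frequency of each rank is recorded per university.

```python
# Stylised sketch of a multi-modeling robustness exercise (invented data,
# not the JRC/CRELL implementation): vary indicator set, weights and
# aggregation method, re-rank under each scenario, and record how often
# each university obtains each rank.
import itertools
import random
from collections import defaultdict

random.seed(0)
universities = ["Uni A", "Uni B", "Uni C", "Uni D"]
indicators = ["ind1", "ind2", "ind3", "ind4"]
scores = {u: {i: random.random() for i in indicators} for u in universities}

def aggregate(uni_scores, subset, weights, method):
    """Combine a subset of indicator scores into one composite value."""
    if method == "weighted_sum":          # linear aggregation
        return sum(weights[i] * uni_scores[i] for i in subset)
    prod = 1.0                            # geometric aggregation
    for i in subset:
        prod *= max(uni_scores[i], 1e-6) ** weights[i]
    return prod

weight_schemes = [
    {i: 1.0 for i in indicators},
    {"ind1": 2.0, "ind2": 1.0, "ind3": 1.0, "ind4": 0.5},
]
rank_freq = defaultdict(lambda: defaultdict(int))  # university -> rank -> count

for size in (3, 4):
    for subset in itertools.combinations(indicators, size):
        for weights in weight_schemes:
            for method in ("weighted_sum", "geometric"):
                ordering = sorted(
                    universities,
                    key=lambda u: aggregate(scores[u], subset, weights, method),
                    reverse=True,
                )
                for rank, u in enumerate(ordering, start=1):
                    rank_freq[u][rank] += 1

for u in universities:
    print(u, dict(rank_freq[u]))  # the frequency matrix of ranks across scenarios
```

A university whose rank frequencies are spread across several positions, rather than concentrated on one, is exactly the kind of case for which the report argues that no meaningful single rank can be assigned.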

The main findings of the report are the following. Both rankings are only robust in the identification of the top 15 performers on either side of the Atlantic, but unreliable on the exact ordering of all other institutes. And, even when combining all twelve indicators in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for those universities. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation to be added to the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.

Beatrice d’Hombres and Michaela Saisana

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank‘) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play in importance terms. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support or a relatively rapid swing of the hatchet at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s relative position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a familiar (to US readers) format, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specifities of Humanities research, that can make it difficult to assess and compare with other sciences. Also,  it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will enable national (at the intra-European scale) peaks and (presumably) valleys of quality output to be mapped, not only for the humanities as a whole but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

Times Higher Education – QS World University Rankings (2008): a niche industry in formation?

The new Times Higher Education – QS World University Rankings (2008) were just released, and the copyright regulations deepen and extend, push and pull, enable and constrain. Global rankings: a niche industry in formation?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the lists consists of 189 universities…

Quite the authority being vested in ranking schemes that are still being hotly debated!

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de-facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high quality universities, despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered the emergence of a collective, integrated, and well-funded interrogation of the ranking schemes before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment cum governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence, until now, of a serious European critique of, or at least interrogation of, rankings and the rankers. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking is a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until more recently, that ranking schemes are deeply geopolitical and geoeconomic vehicles that enable the powerful to maintain their standing and draw yet more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced their acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As with Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities as King Abdullah University of Science and Technology (KAUST) in Saudi Arabia recently did, with a $10 billion endowment, angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be described as a fait accompli phase, national and European political leaders progressively came to view the low standing of European universities in the two key ranking schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly began to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – sit so far off the fast-emerging, and now hegemonic, global knowledge production maps?

In this political context, especially given state control over higher education budgets and the drive of the relaunched Lisbon agenda, Europe’s rankers of ranking schemes were propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well-placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that heaves whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst they generate and how significant their impacts are. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years before sustained action is pretty quick in some sectors, while in others it is not.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, and so on. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics of which they are part and parcel.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university ranking schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged, for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schemes with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are inspired, to a degree, by the German Excellence Initiative, itself partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least not in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French. In some ways it would have been better to release the report simultaneously in French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU, or one of the many emerging initiatives noted above, would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Reactions to the ranking of universities: is Malaysia over-reacting?

I have had a chance to undertake a quick survey among colleagues in other countries regarding reactions in their respective countries to the UK’s Times Higher World University Rankings 2007.

A colleague in the UK noted that, as one might expect from the home of one of the more notorious world rankings and a higher education system obsessed with reputation, ‘league tables’ are much discussed in the UK. The UK government, specifically the Higher Education Funding Council for England (HEFCE), as noted last week, has commissioned major research into five ranking systems and their impact on higher education institutions in England. In other words, the UK government is very concerned with the whole business of ranking universities, for the reputation of the UK as a global centre for higher education is at stake.

Another colleague reported that, among academics in the UK, reactions to the Times Higher rankings vary widely. Many people working in higher education are deeply sceptical and cynical about such league tables, questioning their value, purpose and especially their methodology. For the majority of UK universities that do not appear in the tables, and are probably never likely to appear, the tables are of very little significance. However, for the main research-led universities they are a source of growing interest. These are the universities that see themselves as competing on the world stage. While they will often criticise the methodologies in detail, they will still study the results very carefully and will certainly use good results for publicity and marketing. Several leading UK universities (e.g., Warwick) now have explicit targets, such as being in the top 25 or 50 by a particular year, and are developing strategies with this in mind. It is also reported that most UK students pay little attention to the international tables, but universities are aware that rankings can have a significant impact on the recruitment of international students.

In Hong Kong, the Times Higher rankings have been seriously discussed both in the media and by university presidents (some of whom received higher rankings this year, thus making it easier to request increased funding from government on the back of their success). Among scholars and academics, especially those familiar with the various university ranking systems (the Times Higher rankings and others, such as the Shanghai Jiao Tong University rankings), there is some scepticism, especially concerning the criteria used.

Rankings are a continuous source of debate in the Australian system, no doubt as a result of Australia’s strong focus on the international market. Both the Times Higher rankings and the recent exercise undertaken by the Melbourne Institute have generated quite strong debate, spurred by Vice-Chancellors whose institutions do not score near the top.

In Brazil, it is reported that the ranking of universities did not attract media attention or public debate, for the simple reason that university rankings have had no impact on the budgetary decisions of the government. The more pressing issue on the higher education agenda in Brazil is social inclusion; public universities are thus rewarded for their plans to extend access to their undergraduate programs, especially if those plans involve large numbers of students per faculty member. Being able to attract foreign students is of secondary importance to many universities. Public universities have therefore had, and continue to have, assured access to budget streams that reflect the Government’s historical level of commitment.

A colleague in France noted that the manner in which Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to the Times Higher rankings was relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ the senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia had better watch out, as the world is indeed watching us.

Morshidi Sirat