The Business Side of World University Rankings

Over the last two years I’ve made the point numerous times here that world university rankings have become normalized on an annual cycle, and that they function as data acquisition mechanisms that drill deep into universities while encouraging (seducing?) universities to provide the data for free. In reality, the data is provided at a cost: the staff time allocated to produce it needs to be paid for, and allocating staff time this way generates opportunity costs.

See below for the latest indicator of the business side of world university rankings. Interestingly, today’s press release from Thomson Reuters (reprinted in full) makes no mention of world university rankings, nor of Times Higher Education, the media outlet owned by TSL Education, which was itself acquired by Charterhouse Capital Partners in 2007. Recall that Times Higher Education began working with Thomson Reuters in 2010.

The Institutional Profiles™ that are being marketed here derive data from “a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey,” all of which (apart from the citation metrics) come to the firm via the ‘Times Higher Education World University Rankings (powered by Thomson Reuters).’

Of course there is absolutely nothing wrong with providing services (for a charge) to enhance the management of universities, but would most universities (and their funding agencies) agree, from the start, to the establishment of a relationship where all data is provided for free to a centralized private authority headquartered in the US and UK, and then have this data both managed and monetized by the private authority? I’m not so sure.

This is arguably another case of universities thinking only of themselves rather than looking at the bigger picture. We have a nearly complete absence of collective action on this kind of developmental dynamic; one worthy of greater attention, debate, and oversight, if not formal governance.

Kris Olds

<><><><>

12 Apr 2012

Thomson Reuters Improves Measurement of Universities’ Performance with New Data on Faculty Size, Reputation, Funding and Citation Measures

Comprehensive data now available in Institutional Profiles for universities such as Princeton, McGill, Nanyang Technological, University of Hong Kong and others

Philadelphia, PA, April 12, 2012 – The Intellectual Property & Science business of Thomson Reuters today announced the availability of 138 percent more performance indicators and nearly 20 percent more university data within Institutional Profiles™, the company’s online resource covering more than 500 of the world’s leading academic research institutions. This new data enables administrators and policy makers to reliably measure their institution’s performance and make international comparisons.

Using a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey, Institutional Profiles provides details on faculty size, student body, reputation, funding, and publication and citation data.

Two new performance indicators were also added to Institutional Profiles: International Diversity and Teaching Performance. These measure the global composition of staff and students, international co-authorship, and education input/output metrics, such as the ratio of students enrolled to degrees awarded in the same area. The indicators now cover 100 different areas, ensuring faculty and administrators have the most complete institutional data possible.

All of the data included in the tool has been vetted and normalized for accuracy. The latest update also includes several enhancements to existing performance indicators, such as Normalized Citation Impact. This allows for equally weighted comparisons between subject groups that have varying levels of citations.
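The release does not spell out how the Normalized Citation Impact figure is calculated, but the general logic of field normalization is worth making concrete. A minimal sketch in Python, assuming hypothetical per-field ‘world average’ baselines (the fields, years and figures below are illustrative only, not Thomson Reuters’ data or method):

```python
def normalized_citation_impact(papers, world_baselines):
    """Average of each paper's citations divided by the world-average
    citations for papers in the same field and year. A score of 1.0
    means 'cited at the world average for its field', so subject groups
    with very different citation levels can be compared on equal terms.
    """
    ratios = []
    for p in papers:
        baseline = world_baselines[(p["field"], p["year"])]
        if baseline > 0:
            ratios.append(p["citations"] / baseline)
    return sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical world averages: cell biology papers are cited far more
# heavily than history papers, so raw citation counts would mislead.
world_baselines = {
    ("cell biology", 2010): 25.0,
    ("history", 2010): 2.0,
}
papers = [
    {"field": "cell biology", "year": 2010, "citations": 40},  # 1.6x field average
    {"field": "history", "year": 2010, "citations": 3},        # 1.5x field average
]
print(normalized_citation_impact(papers, world_baselines))  # 1.55
```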

“Institutional Profiles continues to provide answers to the questions that keep administrators up at night: ‘Beyond citation impact or mission statement, which institutions are the best collaboration partners for us to pursue? How can I understand the indicators and data that inform global rankings?’,” said Keith MacGregor, executive vice president at Thomson Reuters. “With this update, the tool provides the resources to reliably measure and compare academic and research performance in new and more complete ways, empowering strategic decision-making based on each institution’s unique needs.”

Institutional Profiles, a module within the InCites™ platform, is part of the research analytics suite of solutions provided by Thomson Reuters that supports strategic decision making and the evaluation and management of research. In addition to InCites, this suite of solutions includes consulting services, custom studies and reports, and Research in View™.

For more information, go to:
http://researchanalytics.thomsonreuters.com/institutionalprofiles/

About Thomson Reuters
Thomson Reuters is the world’s leading source of intelligent information for businesses and professionals. We combine industry expertise with innovative technology to deliver critical information to leading decision makers in the financial and risk, legal, tax and accounting, intellectual property and science and media markets, powered by the world’s most trusted news organization. With headquarters in New York and major operations in London and Eagan, Minnesota, Thomson Reuters employs approximately 60,000 people and operates in over 100 countries. For more information, go to http://www.thomsonreuters.com.

Contacts

Alyssa Velekei
Public Relations Specialist
Tel: +1 215 823 1894

On being seduced by The World University Rankings (2011-12)

Well, it’s ranking season again, and the Times Higher Education/Thomson Reuters World University Rankings (2011-2012) has just been released. The outcome is available here, and a screen grab of the Top 25 universities is available to the right. Link here for a pre-programmed Google News search for stories about the topic, and link here for Twitter-related items (caught via the #THEWUR hash tag).

Polished up further after some unfortunate fallout from last year, this year’s outcome promises to give us an improved, shiny and clean result. But is it?

Like many people in the higher education sector, we too are interested in the ranking outcomes, not that there are many surprises, to be honest.

Rather, what we’d like to ask our readers to reflect on is how the world university rankings debate is configured. Configuration elements include:

  • Ranking outcomes: Where is my university, or the universities of country X, Y, and Z, positioned in a relative sense (to other universities/countries; to peer universities/countries; in comparison to last year; in comparison to an alternative ranking scheme)?
  • Methods: Is the adopted methodology appropriate and effective? How has it changed? Why has it changed?
  • Reactions: How are key university leaders, or ministers (and equivalents) reacting to the outcomes?
  • Temporality: Why do world university rankers choose to release the rankings on an annual basis when once every four or five years is more appropriate (given the actual pace of change within universities)? How did they manage to normalize this pace?
  • Power and politics: Who is producing the rankings, and how do they benefit from doing so? How transparent are they themselves about their operations, their relations (including joint ventures), their biases, their capabilities?
  • Knowledge production: As is patently evident in our recent entry ‘Visualizing the uneven geographies of knowledge production and circulation,’ there is an incredibly uneven structure to the production of knowledge, including dynamics related to language and the publishing business.  Given this, how do world university rankings (which factor in bibliometrics in a significant way) reflect this structural condition?
  • Governance matters: Who is governing whom? Who is being held to account, in which ways, and how frequently? Are the ranked capable of doing more than acting as mere providers of information (for free) to the rankers? Is an effective mechanism needed for regulating rankers and the emerging ranking industry? Do university leaders have any capability (none shown so far!) to collaborate on ranking governance matters?
  • Context(s): How do schemes like the THE’s World University Rankings, the Academic Ranking of World Universities (ARWU), and the QS World University Rankings, relate to broader attempts to benchmark higher education systems, institutions, and educational and research practices or outcomes? And here we flag the EU’s new U-Multirank scheme, and the OECD’s numerous initiatives (e.g., AHELO) to evaluate university performance globally, as well as engender debate about benchmarking too. In short, are rankings like the ones just released ‘fit for purpose’ when it comes to genuinely shedding light on the quality, relevance and efficiency of higher education in a rapidly evolving global context?

The Top 400 outcomes will and should be debated, and people will be curious about the relative place of their universities in the ranked list, as well as about the welcome improvements evident in the THE/Thomson Reuters methodology. But don’t be lured into distraction and focus only on some of these questions, especially those dealing with outcomes, methods, and reactions.

Rather, we also need to ask more hard questions about power, governance, and context, not to mention interests, outcomes, and potential collateral damage to the sector (when these rankings are released and then circulate into national media outlets, and ministerial desktops). There is a political economy to world university rankings, and these schemes (all of them, not just the THE World University Rankings) are laden with power and generative of substantial impacts; impacts that the rankers themselves often do not hear about, nor feel (e.g., via the reallocation of resources).

Is it not time to think more broadly, and critically, about the big issues related to the great ranking seduction?

Kris Olds & Susan Robertson

Field-specific cultures of international research collaboration

Editors’ note: how can we better understand and map out the phenomenon of international research collaboration, especially in a context where bibliometrics does a patchy job with respect to registering the activities and output of some fields/disciplines? This is one of the questions Dr. Heike Jöns (Department of Geography, Loughborough University, UK) grapples with in this informative guest entry in GlobalHigherEd. The entry draws from Dr. Jöns’ considerable experience studying forms of mobility associated with the globalization of higher education and research.

Dr. Jöns (pictured above) received her PhD at the University of Heidelberg (Germany) and spent two years as a Feodor Lynen Postdoctoral Research Fellow of the Alexander von Humboldt Foundation at the University of Nottingham (UK). She is interested in the geographies of science and higher education, with particular emphasis on transnational academic mobility.

Further responses to ‘Understanding international research collaboration in the social sciences and humanities’, and Heike Jöns’ response below, are welcome at any time.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~~~~~~~~

The evaluation of research performance at European universities increasingly draws upon quantitative measurements of publication output and citation counts based on databases such as ISI Web of Knowledge, Scopus and Google Scholar (UNESCO 2010). Bibliometric indicators also inform annually published world university rankings such as the Shanghai and Times Higher Education rankings that have become powerful agents in contemporary audit culture despite their methodological limitations. Both league tables introduced field-specific rankings in 2007, differentiating between the natural, life, engineering and social sciences (both rankings), medicine (Shanghai) and the arts and humanities (Times Higher).

But to what extent do bibliometric indicators represent research output and collaborative cultures in different academic fields? This blog entry responds to this important question raised by Kris Olds (2010) in his GlobalHigherEd entry titled ‘Understanding international research collaboration in the social sciences and humanities‘ by discussing recent findings on field-specific research cultures from the perspective of transnational academic mobility and collaboration.

The inadequacy of bibliometric data for capturing research output in the arts and humanities has, for example, been demonstrated by Anssi Paasi’s (2005) study of international publishing spaces. Decisions about which journals enter the respective databases, their bias towards English-language journals, and their neglect of the monographs and anthologies that dominate fields characterised by individual authorship are just a few of the reasons why citation indexes are unable to capture the complexity, place- and language-specificity of scholarship in the arts and humanities. Mapping the international publishing spaces in the sciences, the social sciences and the arts and humanities using ISI Web of Science data in fact suggests that the arts and humanities are less international and even more centred on the United States and Europe than the sciences (Paasi 2005: 781). Based on the analysis of survey data provided by 1,893 visiting researchers in Germany in the period 1954 to 2000, this GlobalHigherEd entry aims to challenge this partial view by revealing the hidden dimensions of international collaboration in the arts and humanities and elaborating on why research output and collaborative cultures vary not only between disciplines but also between different types of research work (for details, see Jöns 2007; 2009).

The visiting researchers under study were funded by the Humboldt Research Fellowship Programme run by the Alexander von Humboldt Foundation (Bonn, Germany). They came to Germany in order to pursue a specific research project at one or more host institutions for about a year. Striking differences in collaborative cultures by academic field and type of research work are revealed by the following three questions:

1. Could the visiting researchers have done their research project also at home or in any other country?

2. To what extent did the visiting researchers write joint publications with colleagues in Germany as a result of their research stay?

3. In which ways did the collaboration between visiting researchers and German colleagues continue after the research stay?

On question 1.

Research projects in the arts and humanities, and particularly those that involved empirical work, were most often tied to the research context in Germany. They were followed by experimental and theoretical projects in engineering and in the natural sciences, which were much more frequently possible in other countries as well (Figure 1).

Figure 1 — Possibility of doing the Humboldt research project in another country than Germany, 1981–2000 (Source: Jöns 2007: 106)

These differences in place-specificity are closely linked to different possibilities for mobilizing visiting researchers on a global scale. For example, the establishment of new research infrastructure in the physical, biological and technical sciences can easily raise scientific interest in a host country, whereas the mobilisation of new visiting researchers in the arts and humanities remains difficult, as language skills and cultural knowledge are often necessary for conducting research projects in these fields. This is one reason why the natural and technical sciences appear to be more international than the arts and humanities.

On question 2.

Joint publications with colleagues in Germany were most frequently written in physics, chemistry, medicine, engineering and the biological sciences, which are all dominated by multi-authorship. Individual authorship was more frequent in mathematics and the earth sciences and most popular – though with considerable variations between different subfields – in the arts and humanities. The spectrum ranged from every second economist and social scientist who wrote joint publications with colleagues in Germany, via roughly one third in language and cultural studies and history, and every fifth in law, to only every sixth in philosophy. Researchers in the arts and humanities had, much more often than their colleagues from the sciences, already stayed in Germany for study and research prior to the Humboldt research stay (over 95% in the empirical arts and humanities compared to less than 40% in the theoretical technical sciences), as their area of specialisation often required learning the language and studying original sources or local research subjects. They therefore engaged much more closely with German language and culture than natural and technical scientists, but due to the great individuality of their work they not only produced considerably fewer joint publications than their apparently more international colleagues, but their share of joint publications with German colleagues before and after the research stay was also fairly similar (Figure 2).

Figure 2 — Joint publications of Humboldt research fellows and colleagues in Germany, 1981–2000 (Source: Jöns 2007: 107)

For these reasons, internationally co-authored publications are not suitable for evaluating the international attractiveness and orientation of different academic fields, particularly because the complexity of different types of research practices in one and the same discipline makes it difficult to establish typical collaborative cultures against which research output and collaborative linkages could be judged.

On question 3.

This is confirmed when examining continued collaboration with colleagues in Germany after the research stay. The frequency of continued collaboration did not vary significantly between disciplines, but the nature of these collaborations differed substantially. Whereas regular collaboration in the natural and technical sciences almost certainly implied the publication of multi-authored articles in internationally peer-reviewed journals, continued interaction in the arts and humanities, and to a lesser extent in the social sciences, often involved activities beyond the co-authorship of journal articles. Table 1 captures some of these less visible dimensions of international research collaboration, including contributions to German-language scientific journals and book series as well as refereeing for German students, researchers and the funding agencies themselves.



Table 1 — Activities of visiting researchers in Germany after their research stay (in % of Humboldt research fellows 1954-2000; Source: Jöns 2009: 327)

The differences in both place-specificity and potential for co-authorship in different research practices can be explained by their particular spatial ontology. First, different degrees of materiality and immateriality imply varying spatial relations that result in typical patterns of place-specificity and ubiquity of research practices as well as of individual and collective authorship. Due to the corporeality of researchers, all research practices are to some extent physically embedded and localised. However, researchers working with physically embedded material research objects that might not be moved easily, such as archival material, field sites, certain technical equipment, groups of people and events, may be dependent on accessing a particular site or local research context at least once. Those scientists and scholars, who primarily deal with theories and thoughts, are in turn as mobile as the embodiment of these immaterialities (e.g., collaborators, computers, books) allows them to be. Theoretical work in the natural sciences, including, for example, many types of mathematical research, thus appears to be the most ‘ubiquitous’ subject: Its high share of immaterial thought processes compared to relatively few material resources involved in the process of knowledge production (sometimes only pen and paper) would often make it possible, from the perspective of the researchers, to work in a number of different places (Figure 1, above).

Second, the constitutive elements of research vary according to their degree of standardisation. Standardisation results from the work and agreement previously invested in the classification and transformation of research objects. A high degree of standardisation means that the research practice relies on many uniform terms, criteria, formulas and data, components and materials, methods, processes and practices that are generally accepted in the particular field of academic work. Field sites, for example, might initially show no signs of standardisation, whereas laboratory equipment such as test tubes may have been manufactured on the basis of previous – and then standardised – considerations and practices. The field site may be unique, whereas highly standardised laboratory equipment may be found at several sites to which the networks of science have been extended, thereby offering greater flexibility in the choice of the research location. In regard to research practices with a higher degree of immateriality, theoretical practices in the natural and technical sciences show a higher degree of standardisation (e.g., in terms of language) than theoretical and argumentative-interpretative work in the arts and humanities, and thus are less place-specific and offer more potential for co-authorship (Figures 1 and 2).

The resulting two-dimensional matrix of the spatial relations of different research practices accommodates the empirically observed differences in both the place-specificity of the visiting researchers’ projects and their resulting joint publications with colleagues in Germany (Figure 3):

Figure 3 — A two-dimensional matrix on varying spatial relations of different research practices (Source: Jöns 2007: 109)

Empirical work, showing a high degree of materiality and a low degree of standardisation, is most often dependent on one particular site, followed by argumentative-interpretative work, which is characterised by a similarly low degree of standardisation but a higher degree of immateriality. Experimental (laboratory) work, showing a high degree of both materiality and standardisation, can often be conducted in several (laboratory) sites, while theoretical work in the natural sciences, involving a high degree of both immateriality and standardisation, is least often tied to one particular site. The fewest joint publications were written in argumentative-interpretative work, where a large internal (immaterial) research context and a great variety of arguments from different authors in possibly different languages complicate collaboration on a specific topic. Involving an external (material) and highly standardised research context, the highest frequency of co- and multi-authorship was found in experimental (laboratory) work. In short, the more immaterial and standardised the research practice, the lower the place-specificity of one’s work and the easier it would be to work at home or elsewhere; and the more material and standardised the research practice, the more likely is collaboration through co- and multi-authorship.

Based on this work, it can be concluded – in response to two of Kris Olds’ (2010) key questions – that international research collaboration on a global scale can be mapped – if only roughly – for research practices characterised by co- and multi-authorship in internationally peer-reviewed English-language journals, as the required data is provided by citation databases (e.g., Wagner and Leydesdorff 2005; Adams et al. 2007; Leydesdorff and Persson 2010; Matthiessen et al. 2010; UNESCO 2010). When interpreting such mapping exercises, however, one needs to keep in mind that the data included in ISI Web of Knowledge, Scopus and Google Scholar themselves vary considerably.

Other research practices require different research methods such as surveys and interviews and thus can only be mapped from specific perspectives such as individual institutions or groups of researchers (for the application of bibliometrics to individual journals in the arts and humanities, see Leydesdorff and Salah 2010). It might be possible to create baseline studies that help to judge the type and volume of research output and international collaboration against typical patterns in a field of research but the presented case study has shown that the significance of specific research locations, of individual and collective authorship, and of different types of transnational collaboration varies not only between academic fields but also between research practices that crisscross conventional disciplinary boundaries.

In the everyday reality of departmental research evaluation this means that, in a field such as geography, a possible benchmark of three research papers per year may be easily met in most areas of physical geography and some areas of human geography (e.g. economic and social), whereas the nature of research practices in historical and cultural geography, for example, might make it difficult to maintain such a high research output over a number of subsequent years. Applying standardised criteria of research evaluation to this great diversity of publication and collaboration cultures inevitably carries the danger of leading to a standardisation of academic knowledge production.

Heike Jöns

References

Adams J, Gurney K and Marshall S 2007 Patterns of international collaboration for the UK and leading partners Evidence Ltd., Leeds

Jöns H 2007 Transnational mobility and the spaces of knowledge production: a comparison of global patterns, motivations and collaborations in different academic fields Social Geography 2 97-114. Accessed 23 September 2010

Jöns H 2009 ‘Brain circulation’ and transnational knowledge networks: studying long-term effects of academic mobility to Germany, 1954–2000 Global Networks 9 315-38

Leydesdorff L and Persson O 2010 Mapping the geography of science: distribution patterns and networks of relations among cities and institutes Journal of the American Society for Information Science and Technology 61 1622-1634

Leydesdorff L and Salah A A A 2010 Maps on the basis of the Arts & Humanities Citation Index: the journals Leonardo and Art Journal, and “Digital Humanities” as a topic Journal of the American Society for Information Science and Technology 61 787-801

Matthiessen C W, Schwarz A W and Find S 2010 World cities of scientific knowledge: systems, networks and potential dynamics. An analysis based on bibliometric indicators Urban Studies 47 1879-97

Olds K 2010 Understanding international research collaboration in the social sciences and humanities GlobalHigherEd 20 July 2010. Accessed 23 September 2010

Paasi A 2005 Globalisation, academic capitalism, and the uneven geographies of international journal publishing spaces Environment and Planning A 37 769-89

UNESCO 2010 World Social Science Report: Knowledge Divides UNESCO, Paris

Wagner C S and Leydesdorff L 2005 Mapping the network of global science: comparing international co-authorships from 1990 to 2000 International Journal of Technology and Globalization 1 185–208


Governing world university rankers: an agenda for much needed reform

Is it now time to ensure that world university rankers are overseen, if not governed, so as to achieve better quality assessments of the differential contributions of universities in the global higher education and research landscape?

In this brief entry we make a case that something needs to be done about the system in which world university rankers operate. We have two brief points to make about why something needs to be done, and then we outline some options for moving beyond today’s status quo situation.

First, while universities and rankers alike are interested in how well universities are positioned in the emerging global higher education landscape, power over the process, as currently exercised, rests solely with the rankers. Clearly firms like QS and Times Higher Education are open to input, advice, and indeed critique, but in the end they, along with information services firms like Thomson Reuters, decide:

  • How the methodology is configured
  • How the methodology is implemented and vetted
  • When and how the rankings outcomes are released
  • Who is permitted access to the base data
  • When and how errors are corrected in rankings-related publications
  • What lessons are learned from errors
  • How the data is subsequently used

Rankers have authored the process, and universities (not to mention associations of universities, and ministries of education) have simply handed over the raw data. Observers of this process might be forgiven for thinking that universities have acquiesced to the rankers’ desires with remarkably little thought. How and why we’ve ended up in such a state of affairs is a fascinating (if not alarming) indicator of how fearful many universities are of being erased from increasingly mediatized viewpoints, and how slow universities and governments have been in adjusting to the globalization of higher education and research, including the desectoralization process. This situation has some parallels with the ways that ratings agencies (e.g., Standard and Poor’s or Moody’s) have been able to operate over the last several decades.

Second, and as has been noted in two of our recent entries:

the costs associated with providing rankers (especially QS and THE/Thomson Reuters) with data are increasingly concentrated on universities.

On a related note, there is no rationale for the now annual rankings cycle that the rankers have successfully been able to normalize. What really changes on a year-to-year basis apart from changes in ranking methodologies? Or, to paraphrase Macquarie University’s vice-chancellor, Steven Schwartz, in this Monday’s Sydney Morning Herald:

“I’ve never quite adjusted myself to the idea that universities can jump around from year to year like bungy jumpers,” he says.

“They’re like huge oil tankers; they take forever to turn around. Anybody who works in a university realises how little they change from year to year.”

Indeed, if the rationale for an annual cycle of rankings were so obvious, government ministries would surely facilitate more annual assessment exercises. Even the most managerial and bibliometric-predisposed of governments anywhere – in the UK – has spaced its intense research assessment exercise out over a 4-6 year cycle. And yet the rankers have universities on the run. Why? Because this cycle facilitates data provision for commercial databases, and it enables increasingly competitive rankers to construct their own lucrative markets. This, perhaps, explains the 6 July 2010 reaction from QS to a call in GlobalHigherEd for a four-year versus one-year rankings cycle:

Thus we have a situation where rankers seeking to construct media/information service markets are driving up data provision time and costs for universities, facilitating continual change in methodologies, and, as a consequence, generating some surreal swings in ranked positions. Signs abound that rankers are driving too hard, taking too many risks, while failing to respect universities, especially those outside of the upper echelon of the rank orders.

Assuming you agree that something should happen, the options for action are many. Given what we know about the rankers, and the universities that are ranked, we have developed four options, in no order of priority, to further discussion on this topic. Clearly there are other options, and we welcome alternative suggestions, as well as critiques of our ideas below.

The first option for action is the creation of an ad-hoc task force by 2-3 associations of universities located within several world regions, the International Association of Universities (IAU), and one or more international consortia of universities. Such an initiative could build on the work of the European University Association (EUA), which created a regionally-specific task force in early 2010. Following an agreement to halt world university rankings for two years (2011 & 2012), this new ad-hoc task force could commission a series of studies regarding the world university rankings phenomenon, not to mention the development of alternative options for assessing, benchmarking and comparing higher education performance and quality. In the end the current status quo regarding world university rankings could be sanctioned, but such an approach could just as easily lead to new approaches, new analytical instruments, and new concepts that might better shed light on the diverse impacts of contemporary universities.

A second option is an inter-governmental agreement about the conditions in which world university rankings can occur. This agreement could be forged in the context of bi-lateral relations between ministers in select countries: a US-UK agreement, for example, would ensure that the rankers reform their practices. A variation on this theme is an agreement of ministers of education (or their equivalent) in the context of the annual G8 University Summit (to be held in 2011), or the next Global Bologna Policy Forum (to be held in 2012) that will bring together 68+ ministers of education.

The third option for action is non-engagement, as in an organized boycott. This option would have to be pushed by one or more key associations of universities. The outcome of this strategy, assuming it is effective, is the shutdown of unique data-intensive ranking schemes like the QS and THE world university rankings for the foreseeable future. Numerous other schemes (e.g., the new High Impact Universities) would carry on, of course, for they use more easily available or generated forms of data.

A fourth option is the establishment of an organization that has the autonomy, and the resources, to oversee rankings initiatives, especially those that depend upon university-provided data. No such organization currently exists: the only one that comes close to what we are calling for (the IREG Observatory on Academic Ranking and Excellence) suffers from the inclusion of too many rankers on its executive committee (a recipe for serious conflicts of interest), and from a reliance on member fees for a significant portion of its budget (ditto).

In closing, the acrimonious split between QS and Times Higher Education, and the formal entry of Thomson Reuters into the world university rankings business, have elevated this phenomenon to a new ‘higher-stakes’ level. Given these developments, given the expenses associated with providing the data, given some of the glaring errors or biases associated with the 2010 rankings, and given the problems associated with using university-scaled quantitative measures to assess ‘quality’ in a relative sense, we think it is high time for some new forms of action. And by action we don’t mean more griping about methodology, but attention to the ranking system that universities are embedded in, yet have singularly failed to construct.

The current world university rankings juggernaut is blinding us, yet innovative new assessment schemes — schemes that take into account the diversity of institutional geographies, profiles, missions, and stakeholders — could be fashioned if we take pause. It is time to make more proactive decisions about just what types of values and practices should be underlying comparative institutional assessments within the emerging global higher education landscape.

Kris Olds, Ellen Hazelkorn & Susan Robertson

A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge over Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the first versus second placing). For non-rankings watchers, Harvard had been pegged as No 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, as well as the decline registered by other US universities on the ‘international faculty’ measure, was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front end (post-ranking) acquisition of the data — focuses the majority of costs on the universities and the majority of benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end it is a cost primarily borne by the taxpayer (for most universities are public). It is the taxpayer that pays the majority of the administrators’ salaries to enable them to compile the data and submit it to the rankers.

A second cost, though indirect and obscured, relates to the use of rankings data by credit rating agencies like Moody’s or Standard and Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘). Given that the cost of borrowing for universities is determined by their credit-worthiness, and rankings are used in this process, we can conclude that any increase in the cost of borrowing is actually also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to freely use the base rankings data for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled.  Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, access to all of the base data beyond the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September), or QS World University Rankings Results 2010, should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, as in exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that taxpayer-funded research generates research outcomes that society should have free, open and timely access to. Surely this open access principle applies just as well to university rankings data!

Another reason society deserves to have free, open and timely access to the data is that a change in practices will shed light on how the organizations ranking universities implement their methodologies; methodologies that are ever changing (and hence more open to error).

Finer-grained access to the data would enable us to check out exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, outcomes this close lead us to cross-check the data, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between releases of the hyper-competitive QS vs THE world university rankings.
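Neither QS nor THE publishes the full scaling procedure alongside headline scores like these, but a common approach in composite rankings is to scale each indicator across institutions, take a weighted sum, and rescale so the leader sits at exactly 100. A minimal sketch of that generic approach (not QS’s actual method; the institutions, indicators, values and weights below are invented) shows how two institutions can end up less than a point apart:

```python
def composite_scores(indicators, weights):
    """Generic composite-ranking sketch (not QS's actual method).

    `indicators` maps institution -> {indicator name: raw value};
    `weights` maps indicator name -> weight. Each indicator is min-max
    scaled to 0-100 across institutions, a weighted sum is taken, and
    the result is rescaled so the top institution scores exactly 100.
    """
    names = list(weights)
    scaled = {}
    for inst, vals in indicators.items():
        scaled[inst] = {}
        for n in names:
            col = [v[n] for v in indicators.values()]
            lo, hi = min(col), max(col)
            scaled[inst][n] = 100 * (vals[n] - lo) / (hi - lo) if hi > lo else 100.0
    totals = {inst: sum(weights[n] * scaled[inst][n] for n in names)
              for inst in indicators}
    top = max(totals.values())
    return {inst: round(100 * t / top, 2) for inst, t in totals.items()}

# Invented raw indicator values for three institutions.
indicators = {
    "A": {"reputation": 99, "citations": 97, "faculty_student": 95},
    "B": {"reputation": 100, "citations": 94, "faculty_student": 98},
    "C": {"reputation": 60, "citations": 70, "faculty_student": 55},
}
weights = {"reputation": 0.5, "citations": 0.3, "faculty_student": 0.2}
print(composite_scores(indicators, weights))
# {'A': 100.0, 'B': 99.29, 'C': 0.0}
```

With a composite built this way, a fraction of a point can separate first and second place even when the two institutions lead on different underlying indicators, which is precisely why access to the base data matters for any meaningful cross-check.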

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it will admittedly rebalance the current focus of the majority of the costs on the universities, and the majority of the benefits on the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

Understanding international research collaboration in the social sciences and humanities

How can we map out and make sense of the changing nature of research collaboration at a global scale? This is an issue many people and institutions are grappling with, with no easy solutions.

As noted in several previous GlobalHigherEd entries:

collaboration between researchers across space is clearly increasing, as well as being increasingly sought after. From a sense that ‘global challenges’ like climate change demand collaboration, through to a sense that international collaboration generates higher impact (in a citation impact factor sense) output, there are signs that the pressure to facilitate collaboration will only increase.

At the same time, however, government ministries, funding councils, higher education associations, and universities themselves, are all having a challenging time making sense of the changing nature of research collaboration across space. Common questions include:

  • Can this phenomenon be mapped out, and if so how and at what scales?
  • Can baseline studies be created such that the effects of new international collaborative research programs can be measured?
  • What happens to research practices and collaborative relations when universities join international consortia of universities?

One option is the use of bibliometric technologies to map out the changing nature of research collaboration across space. For example, the international linkages of the Australian Group of Eight (Go8) universities were mapped out (see some sample images below from the report Thomson ISI Go8 NCR dataset: Go8 International Collaborations, available via this University of Sydney website).

Other reports like Science-Metrix’s Scientific Collaboration between Canada and California: A Bibliometric Study (2008) used similar forms of data to understand collaboration between a country and a foreign state. I’ve also seen similar types of bibliometric-reliant reports while participating in discussions at Worldwide Universities Network (WUN) meetings, as well as on Thomson Reuters’ own website.
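The mechanics behind such collaboration maps are, at their core, simple counts of co-authored papers. A minimal sketch, assuming publication records have already been reduced to country-level author affiliations (a hypothetical structure; real Web of Science or Scopus exports need parsing and address cleaning first):

```python
from collections import Counter
from itertools import combinations

def country_collaboration_counts(publications):
    """Count international co-authorship links between countries.

    Each publication is a dict with an 'author_countries' list. For every
    paper, each unique pair of distinct countries on the author list is
    counted once ('whole counting'), yielding the link weights that
    collaboration maps typically visualise.
    """
    links = Counter()
    for pub in publications:
        countries = sorted(set(pub["author_countries"]))
        for pair in combinations(countries, 2):
            links[pair] += 1
    return links

publications = [
    {"author_countries": ["Australia", "UK", "UK"]},
    {"author_countries": ["Australia", "USA"]},
    {"author_countries": ["Australia", "UK", "USA"]},
]
print(country_collaboration_counts(publications))
# Counter({('Australia', 'UK'): 2, ('Australia', 'USA'): 2, ('UK', 'USA'): 1})
```

Whatever the counting rule, only papers indexed in the underlying database ever enter such tallies, which is where the limitations discussed below begin to bite.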

Another option is to take an institutionally-specific perspective, though via the acquisition and analysis of a broader array of forms of data. This type of mapping can be developed via bibliometric technologies, researcher surveys, an analysis of travel expense claim data, an analysis of media ‘expertise’ databases maintained by universities, and so on. This is an oft-desired form of analysis; one designed to feed into central repositories of knowledge (e.g., the University of Notre Dame is developing such a site, tentatively called Global ND). Yet such an approach is challenging and resource consuming to implement.

In the end, for a range of reasons, bibliometrics are often the fallback tool to map out international collaboration. Bibliometrics have their important uses, of course, but they are not effective in capturing the research practices of all research scholars, especially those in the humanities and some arms of the social sciences.

Why? Well, the main reason is that different disciplines have different publishing practices, an issue funding councils like the Social Sciences and Humanities Research Council of Canada (SSHRC), or European agencies (including DFG, ESRC, AHRC, NWO, ANR and ESF), have recently been exploring. See, for example, this March 2010 ESF report (Towards a Bibliometric Database for the Social Sciences and Humanities – A European Scoping Project), or Bibliometric Analysis of Research Supported by SSHRC: Design Study (March 2009) – a report for SSHRC by Science-Metrix.

If we go down the mapping route and rely too heavily upon bibliometrics, do we risk letting the limitations of Thomson Reuters’ ISI Web of Knowledge, or the Scopus database, slowly and subtly create understandings of international collaboration that erase from view some very important researcher-to-researcher collaborations in the humanities, as well as in some of the social sciences? Perhaps so, perhaps not!

In this context I am in search of some assistance.

If you or your colleagues have developed some insightful ways to map out international research collaboration patterns and trends in the social sciences and humanities, whatever the scale, please fill me in via <kolds@wisc.edu> or via the comments section below. Or one alternative response is to reject the whole idea of mapping, bibliometrics, and so on, and its associated managerialism. In any case, following a compilation of responses, and some additional research, I’ll share the findings via a follow-up entry in late August.

Thank you!

Kris Olds

Are we witnessing the denationalization of the higher education media?

The denationalization of higher education – the process whereby developmental logics, frames, and practices are increasingly associated with what is happening at a larger (beyond the nation) scale – continues apace. As alluded to in my last two substantive entries:

this process is being shaped by new actors, new networks, new rationalities, new technologies, and new temporal rhythms. Needless to say, this development process is also generating a myriad of impacts and outcomes, some welcome, and some not.

While the denationalization process is a phenomenon that is of much interest to policy-making institutions (e.g., the OECD), foundations and funding councils, scholarly research networks, financial analysts, universities, and the like, I would argue that it is only now, at a relatively late stage in the game, that the higher education media is starting to take more systematic note of the contours of denationalization.

How is this happening? I will address this question by focusing in on recent changes in the English language higher education media in two key countries – the UK and the USA (though I recognize that University World News, described below, is not so simply placed).

From a quantitative and qualitative perspective, we are seeing rapid growth in the ostensibly ‘global’ coverage of the English-language higher education media from the mid-2000s on. While some outlets (e.g., the Chronicle of Higher Education) have had correspondents abroad since the 1970s, there are some noteworthy developments:

2004/2005

2007

  • University World News (UWN) launched in October. This outlet is the product of a network of journalists, many formerly associated with THES, who were frustrated with the disconnect between the globalization of higher education and the narrow national focus of ‘niche’ higher education media outlets. As with IHE, UWN’s free digital-only mode enhances the ability of this outlet to reach a relatively wide range of people located throughout the world.

2009/2010

  • Chronicle of Higher Education launches a virtual Global edition (similar in style to the New York Times’ Global edition) in May. A new $2 million strategic plan leads to the ongoing hiring of more Washington DC-based editorial staff, more correspondents (to be based in Latin America, Asia, the Middle East and Europe), enhanced travel for US-based sectoral experts, and the establishment of a new weblog (WorldWise).
  • Inside Higher Ed announces it is hosting three new weblogs (GlobalHigherEd; University of Venus; The World View), all with substantial globally-themed coverage. Reporter staff time retuned, to a degree, to prioritize key global issues/processes/patterns. IHE forms collaborative relationship with Times Higher Education to cross-post selected articles on their respective web sites.
  • Times Higher Education (THE) teams up with Thomson Reuters to produce the Times Higher Education/Thomson Reuters World University Rankings (2010 on). THE continues to draw upon guest contributions from faculty about ‘global’ issues and developmental dynamics: this is partly an outcome of seeking to meet the needs and conceptual vocabulary of their faculty-dominated audience, while also controlling staff costs. The digital edition of THE International launched in July 2010.

From a temporal and technological perspective, it is clear that all of these outlets are ramping up their capacity to disseminate digital content, facilitate and/or shape debates, market themselves, and build relevant multi-scalar networks. For example, I can’t help but think about the differences between how I engaged with the THES (as it used to be called) as a Bristol-based reader in the first half of the 1990s and now. In the 1990s we would have friendly squabbles in the Geography tea room to get our hands on it so we could examine the jobs pages. Today, in 2010, THE staffers tweet (via @timeshighered and @THEworldunirank) dozens of times per day, and I can sit here in Madison WI and read the THE website, as well as THE International, the moment they are loaded up on the web.

While all of these higher education media outlets are seeking to enhance their global coverage, they are obviously approaching it in their own unique ways, reflective of their organizational structure and resources, the nature of their audiences, and the broader media and corporate contexts in which they are embedded.

In many ways, then, the higher education media are key players in the new global higher education landscape, for they shape debates via what they cover and what they ignore. These media firms are also now able to position themselves on top of hundreds of non-traditional founts of information via Twitter sources, select weblogs (some of which they are adopting), state-supported news crawlers (e.g., Canada’s Manitoba International Education News; the Netherlands’ forthcoming NUFFICblog; the UK’s HE International Unit site and newsletter), cross-references to other media sources (e.g., they often profile relevant NY Times stories), and so on — a veritable BP oil well gusher of information about the changing higher education landscape. In doing so, the higher education media outlets are positioning themselves as funnels or channels of relevant (it is hoped) and timely information and knowledge.

What are we to make of the changes noted above?

In my biased view, these are positive changes on many levels for they are reflective of media outlets recognizing that the world is indeed changing, and that they have an obligation to profile and assist others in better understanding this emerging landscape. Of course these are private media firms that sell services and must make a profit in the end, but they are firms managed by people with a clear love for the complex worlds of higher education.

This said, there are some silences, occlusions, and possible conflicts of interest, though not necessarily by design.

First, English is clearly the lingua franca associated with this new media landscape. This is not surprising, perhaps, given my selective focus and the structural forces at work, but it is worth pausing to reflect on the implications of this linguistic bias. Concerns aside, there are no easy solutions to the hegemony of English in the global higher education media world. For example, while there is no European higher education media ‘voice’ (see ‘Where is Europe’s higher education media?‘), if one were to emerge could it realistically function in any other language than English given the diversity of languages used in the 47 member country systems making up the European Higher Education Area?

Second, these outlets, as well as many others I have not mentioned, are all grappling with the description versus analysis tension, and the causal forces versus outcomes focus tension. Light and breezy stories may capture initial interest, but in the end the forces shaping the outcomes need to be unpacked and deliberated about.

Third, the diversification strategies that these media outlets have considered, and selectively adopted, can generate potential conflicts of interest. I have a difficult time, for example, reading Washington Post-based stories about the for-profit higher education sector knowing that this newspaper is literally kept afloat by Kaplan, a major for-profit higher education firm. And insights and effort aside, can THE journalists and editors write about their own rankings, or other competitive ranking initiatives (e.g., see “‘Serious defects’ apparent in ‘crude’ European rankings project”), with the distance needed to be analytical versus boosterish? I’ll leave the ‘necessary distance’ question for others to reflect about, and assume that this is a question that the skilled professionals representing the Washington Post and the THE must be grappling with.

Finally, is it possible to provide The World View, be WorldWise, or do justice to the ‘global’, in a weblog or any media outlet? I doubt it, for we are all situated observers of the unfolding of the global higher education landscape. There is no satellite platform that is possible to stand upon, and we are all (journalists, bloggers, pundits, academics, etc.) grappling with how to make sense of the denationalizing systems we know best, not to mention the emerging systems of regional and global governance that are being constructed.

All that can be done, perhaps, is to enhance analytical capabilities, encourage the emergence of new voices, and go for it while being open and transparent about biases and agendas, blind spots and limitations.

Kris Olds

Note: my sincere thanks to the editors of the Chronicle of Higher Education, Inside Higher Ed, Times Higher Education, and University World News, for passing on their many insights via telephone and email correspondence.  And thanks to my colleagues Yi-Fu Tuan and Mary Churchill for their indirectly inspirational comments about World views this past week. Needless to say, the views expressed above are mine alone.

Bibliometrics, global rankings, and transparency

Why do we care so much about the actual and potential uses of bibliometrics (“the generic term for data about publications,” according to the OECD), and world university ranking methodologies, but care so little about the private sector firms, and their inter-firm relations, that drive the bibliometrics/global rankings agenda forward?

This question came to mind when I was reading the 17 June 2010 issue of Nature magazine, which includes a detailed assessment of various aspects of bibliometrics, including the value of “science metrics” to assess aspects of the impact of research output (e.g., publications) as well as “individual scientific achievement”.

The Nature special issue, especially Richard Van Noorden’s survey on the “rapidly evolving ecosystem” of [biblio]metrics, is well worth a read. Even though bibliometrics can be a problematic and fraught dimension of academic life, they are rapidly becoming an accepted element of the governance (broadly defined) of higher education and research. Bibliometrics are having a diverse and increasingly deep impact on governance processes at a range of scales, from the individual (a key focus of the Nature special issue) through to the unit/department, the university, the discipline/field, the national, the regional, and the global.

Now, while this “ecosystem” is developing rapidly, and a plethora of innovations are occurring regarding how different disciplines/fields should or should not utilize bibliometrics to better understand the nature and impact of knowledge production and dissemination, it is interesting to stand back and think about the non-state actors producing, for profit, this form of technology that meshes remarkably well with our contemporary audit culture.

In today’s entry, I’ve got two main points to make, before concluding with some questions to consider.

First, it seems to me that there is a disproportionate amount of research being conducted on the uses and abuses of metrics in contrast to research on who the producers of these metrics are, how these firms and their inter-firm relations operate, and how they attempt to influence the nature of academic practice around the world.

Now, I am not seeking to imply that firms such as Elsevier (producer of Scopus), Thomson Reuters (producer of the ISI Web of Knowledge), and Google (producer of Google Scholar), are necessarily generating negative impacts (see, for example, ‘Regional content expansion in Web of Science®: opening borders to exploration’, a good news story from Thomson Reuters that we happily sought out), but I want to make the point that there is a glaring disjuncture between the volume of research conducted on bibliometrics versus research on these firms (the bibliometricians), and how these technologies are brought to life and to market. For example, a search of Thomson Reuters’ ISI Web of Knowledge for terms like Scopus, Thomson Reuters, Web of Science and bibliometrics generates a nearly endless list of articles comparing the main databases, the innovations associated with them, and so on, but amazingly little research on Elsevier or Thomson Reuters (i.e., the firms). From thick to thin, indeed, and somewhat analogous to the lack of substantial research available on ratings agencies such as Moody’s or Standard and Poor’s.

Second, and on a related note, the role of firms such as Elsevier and Thomson Reuters, not to mention QS Quacquarelli Symonds Ltd and TSL Education Ltd, in fueling the global rankings phenomenon has received remarkably little attention in contrast to vigorous debates about methodologies. The four main global ranking schemes, past and present, for example, all draw from the databases provided by Thomson Reuters and Elsevier.

One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly no stated) logic for such a frequency. Why not every 3-4 years, for example, perhaps in alignment with the World Cup or the Olympics? I can understand why rankings have to happen more frequently than the US’ long-delayed National Research Council (NRC) scheme, and they certainly need to happen more frequently than the years France wins the World Cup championship title (sorry…), but why rank every single year?

But let’s think about this issue with the firms in mind, rather than the pros and cons of the methodologies.

From a firm perspective, the annual cycle arguably needs to become normalized for it is a mechanism to extract freely provided data from universities. This data is clearly used to rank, but it is also used to feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission, which is intensely involved in benchmarking and is now bankrolling a European ranking scheme), and the like.

QS Quacquarelli Symonds Ltd, for example, was marketing such services (see an extract, above, from a brochure) at their stand at the recent NAFSA conference in Kansas City, while Thomson Reuters has been busy developing what they deem the Global Institutional Profiles Project. This latter project is being spearheaded by Jonathan Adams, a former Leeds University staff member who established a private firm (Evidence Ltd) in the early 1990s that rode the UK’s Research Assessment Exercise (RAE) and European ERA waves before being acquired by Thomson Reuters in January 2009.

Sophisticated on-line data entry portals (see a screen grab of one above) are also being created. These portals build a free-flow (at least one-way) pipeline between the administrative offices of hundreds of universities around the world and the firms doing the ranking.

Data demands are becoming very resource-intensive for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: altogether there are 60 data fields, of which 10 are critical to the QS ranking exercise to be launched in October 2010. Path dependency dynamics clearly exist, for once the pipelines are laid, the complexity of data requests can be gradually ramped up.

A key objective, then, seems to involve using annual global rankings to update fee-generating databases, not to mention boost intra-firm knowledge bases and capabilities (for consultancies), all operational at the global scale.

In closing, first: is the posited disjuncture between research on bibliometrics and research on the bibliometricians (and the information service firms these units are embedded within) worth noting and doing something about?

Second, what is the rationale for annual rankings versus a more measured rankings window, in a temporal sense? Indeed, why not synchronize all global rankings to specific years (e.g., 2010, 2014, 2018) so as to reduce the strain on universities vis a vis the provision of data, and enable timely comparisons between competing schemes? A more measured pace would arguably reflect the actual pace of change within our higher education institutions, versus the needs of these private firms.

And third, are firms like Thomson Reuters and Elsevier, as well as their partners (esp., QS Quacquarelli Symonds Ltd and TSL Education Ltd), being as transparent as they should be about the nature of their operations? Perhaps it would be useful to have accessible disclosures/discussions about:

  • What happens with all of the data that universities freely provide?
  • What is stipulated in the contracts between teams of rankers (e.g., Times Higher Education and Thomson Reuters)?
  • What rights do universities have regarding the open examination and use of all of the data and associated analyses created on the basis of the data universities originally provided?
  • Who should be governing, or at least observing, the relationship between these firms and the world’s universities? Is this relationship best continued on a bilateral firm to university basis? Or is the current approach inadequate? If it is perceived to be inadequate, should other types of actors be brought into the picture at the national scale (e.g., the US Department of Education or national associations of universities), the regional-scale (e.g., the European University Association), and/or the global scale (e.g., the International Association of Universities)?

In short, is it not time that the transparency agenda the world’s universities are being subjected to also be applied to the private sector firms that are driving the bibliometrics/global rankings agenda forward?

Kris Olds

Developments in world institutional rankings; SCImago joins the club

Editor’s note: this guest entry was kindly written by Gavin Moodie, principal policy adviser of Griffith University in Australia. Gavin (pictured to the right) is most interested in the relations between vocational and higher education. His book From Vocational to Higher Education: An International Perspective was published by McGraw-Hill last year. Gavin’s entry sheds light on a new ranking initiative that needs to be situated within the broad wave of contemporary rankings – and bibliometrics more generally – that are being used to analyze, legitimize, critique, and promote universities, not to mention extract revenue from them. Our thanks to Gavin for the illuminating contribution below.

~~~~~~~~~~~~~~~~~~~~~~~~

It has been a busy time for world institutional rankings watchers recently. Shanghai Jiao Tong University’s Institute of Higher Education published its academic ranking of world universities (ARWU) for 2009. The institute’s 2009 rankings include its by now familiar ranking of 500 institutions’ overall performance and the top 100 institutions in each of five broad fields: natural sciences and mathematics, engineering/technology and computer sciences, life and agriculture sciences, clinical medicine and pharmacy, and social sciences. This year Dr. Liu and his colleagues have added rankings of the top 100 institutions in each of five subjects: mathematics, physics, chemistry, computer science and economics/business.

Times Higher Education announced that over the next few months it will develop a new method for its world university rankings which in future will be produced with Thomson Reuters. Thomson Reuters’ contribution will be guided by Jonathan Adams (Adams’ firm, Evidence Ltd, was recently acquired by Thomson Reuters).

And a new ranking has been published, SCImago institutions rankings: 2009 world report. This is a league table of research institutions by various factors derived from Scopus, the database of the huge multinational publisher Elsevier. SCImago’s institutional research rank is distinctive in including with higher education institutions government research organisations such as France’s Centre National de la Recherche Scientifique, health organisations such as hospitals, and private and other organisations. Only higher education institutions are considered here. The ranking was produced by the SCImago Research Group, a Spain-based research network “dedicated to information analysis, representation and retrieval by means of visualisation techniques”.

SCImago’s rank is very useful in not cutting off at the top 200 or 500 universities, but in including all organisations with more than 100 publications indexed in Scopus in 2007. It therefore includes 1,527 higher education institutions in 83 countries. But even so, it is highly selective, including only 16% of the world’s estimated 9,760 universities, 76% of US doctoral-granting universities, 65% of UK universities and 45% of Canada’s universities. In contrast, all of New Zealand’s universities and 92% of Australia’s universities are listed in SCImago’s rank. Some 38 countries have seven or more universities in the rank.

SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised SCImago journal rank and normalised citations per output. This discussion will concentrate on total outputs and normalised citations per output.

Together these measures show that countries have been following two broad paths to supporting their research universities. One group of countries in northern continental Europe, around Germany, has supported a reasonably even development of its research universities, while another group of countries influenced by the UK and the US has developed its research universities much more unevenly. Both approaches seem to be successful in supporting research volume and quality, at least as measured by publications and citations.

Volume of publications

Because a reasonable number of countries have several higher education institutions listed in SCImago’s rank, it is possible to consider countries’ performance rather than concentrate on individual institutions, as the smaller ranks encourage. I do this by taking the average of the performance of each country’s universities. The first measure of interest is the number of publications each university has indexed in Scopus over the five years from 2003 to 2007, which is an indicator of the volume of research. The graph in figure 1 shows the mean number of outputs for each country’s higher education research institutions. It shows only countries which have more than six universities included in SCImago’s rank, which leaves out 44 countries and thus much of the tail in institutions’ performance.

Figure 1: mean of universities’ outputs for each country with > 6 universities ranked


These data are given in table 1. The first column gives the number of higher education institutions each country has ranked in SCImago institutions rankings (SIR): 2009 world report. The second column shows the mean number of outputs indexed in Scopus for each country’s higher education research institutions from 2003 to 2007. The next column shows the standard deviation of the number of outputs for each country’s research universities.

The third column in table 1 shows the coefficient of variation, which is the standard deviation divided by the mean and multiplied by 100. This is a measure of the evenness of the distribution of outputs amongst each country’s universities. Thus, the five countries whose universities had the highest average number of outputs indexed in Scopus from 2003 to 2007 – the Netherlands, Israel, Belgium, Denmark and Sweden – also had a reasonably low coefficient of variation, below 80. This indicates that research volume is spread reasonably evenly amongst those countries’ universities. In contrast, Canada, which had the sixth highest average number of outputs, has a reasonably high coefficient of variation of 120, indicating an uneven distribution of outputs amongst Canada’s research universities.
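For readers who want to reproduce this measure, the arithmetic is straightforward. Below is a minimal Python sketch using made-up publication counts for two hypothetical countries; the figures and country labels are purely illustrative and are not SCImago's data:

```python
import statistics

# Illustrative (made-up) publication counts, 2003-2007, for the ranked
# universities of two hypothetical countries.
outputs = {
    "Country A": [12000, 9000, 7000, 5000, 3000],   # relatively even spread
    "Country B": [30000, 4000, 3500, 2500, 2000],   # dominated by one 'champion'
}

for country, counts in outputs.items():
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)   # population standard deviation
    cv = sd / mean * 100             # coefficient of variation
    print(f"{country}: mean = {mean:.0f}, sd = {sd:.0f}, CV = {cv:.0f}")
```

On these invented numbers, Country A comes out with a coefficient of variation of roughly 43 while Country B sits near 130, which is the kind of contrast table 1 draws between, say, the Netherlands and Canada.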

The final column in table 1 shows the mean of SCImago’s international collaboration score, which is a score of the proportions of the institution’s outputs jointly authored with someone from another country. The US’ international collaboration is rather low because US authors collaborate more often with authors in other institutions within the country.

Table 1: countries with > 6 institutions ranked by institutions’ mean outputs, 2007

Source: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report.

Citations per paper by field

We next examine citations per paper by field of research, which is an indicator of the quality of research. This is the ratio between the average citations per publication of an institution and the world number of citations per publication over the same time frame and subject area. SCImago says it computed this ratio using the method established by Sweden’s Karolinska Institutet, which it called the ‘Item oriented field normalized citation score average’. A score of 0.8 means the institution is cited 20% below average and 1.3 means the institution is cited 30% above average.
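In symbols (the notation here is illustrative rather than SCImago's own), the normalised citation score for an institution in a given field and time window is simply the institution's citations per paper divided by the world's citations per paper for the same field and window:

```latex
\[
  \mathrm{NCS}_{i,f,t} \;=\;
  \frac{c_{i,f,t} \,/\, p_{i,f,t}}{C_{f,t} \,/\, P_{f,t}}
\]
```

Here c and p denote the institution's citations and publications in field f over window t, and C and P the corresponding world totals. A value of 1.0 means world-average citation impact, so 0.8 and 1.3 correspond to the '20% below' and '30% above' readings mentioned above.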

Figure 2 shows mean normalised citations per paper for each country’s higher education research institutions from 2003 to 2007, again showing only countries which have more than six universities included in SCImago’s rank. The graph for an indicator of research quality in figure 2 is similar in shape to the graph of research volume in figure 1.

Figure 2: mean of universities’ normalised citations per paper for each country with > 6 universities ranked

Table 2 shows countries with more than six higher education research institutions ranked by their institutions’ mean normalised citations. This measure distinguishes more sharply between institutions than volume of outputs – the coefficients of variation for countries’ mean normalised citations are higher than those for the number of publications. Nonetheless, several countries with high mean normalised citations have an even performance amongst their universities on this measure – Switzerland, Netherlands, Sweden, Germany, Austria, France, Finland and New Zealand.

Finally, I wondered whether countries which had a reasonably even performance of their research universities by volume and quality of publications reflected a more equal society. To test this, I obtained from the Central Intelligence Agency’s (2009) World Factbook the Gini index of the distribution of family income within a country. A country with a Gini index of 0 would have perfect equality in the distribution of family income, whereas a country with perfect inequality in its distribution of family income would have a Gini index of 100. There is a modest correlation of 0.37 between a country’s Gini index and its coefficient of variation for both publications and citations.
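As a rough sketch of that check, with hypothetical Gini and coefficient-of-variation values standing in for the actual CIA World Factbook and SCImago-derived figures, the correlation can be computed along these lines:

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical stand-ins for the CIA World Factbook Gini indices and the
# SCImago-derived coefficients of variation for publications.
gini = [23, 27, 31, 32, 34, 45]
cv_publications = [70, 75, 65, 120, 110, 140]

print(f"Pearson r = {pearson(gini, cv_publications):.2f}")
```

With the real figures for all countries having more than six ranked institutions, this kind of calculation yields the modest 0.37 reported above.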

Table 2: countries with > 6 institutions ranked by institutions’ normalised citations per output

Sources: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report; Central Intelligence Agency (2009) The world factbook.

Conclusion

SCImago’s institutions research rank is sufficiently comprehensive to support comparisons between countries’ research higher education institutions. It finds two patterns amongst countries whose research universities have a high average volume and quality of research publications. One group of countries has a fairly even performance of their research universities, presumably because they have had fairly even levels of government support. This group is in northern continental Europe and includes Switzerland, Germany, Sweden, the Netherlands, Austria, Denmark and Finland. The other group of countries also has a high average volume and quality of research publications, but spread much more unevenly between universities. This group includes the US, the UK and Canada.

This finding is influenced by the measure I chose to examine countries’ performance, the average of their research universities’ performance. Other results may have been found using another measure of countries’ performance, such as the number of universities a country has in the top 100 or 500 research universities, normalised by gross domestic product. But such a measure would reflect not the overall performance of a country’s research universities, but only the performance of its champions. Whether one is interested in a country’s overall performance or just the performance of its champions depends on whether one believes more benefit is gained from a few outstanding performers or several excellent performers. That would usefully be the subject of another study.

Gavin Moodie

References

Central Intelligence Agency (2009) The world factbook (accessed 29 October 2009).

SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report (revised edition accessed 20 October 2009).

From rhetoric to reality: unpacking the numbers and practices of global higher ed

Numbers, partnerships, linkages, and collaboration: some key terms that seem to be bubbling up all over the place right now.

On the numbers front, the ever-active Cliff Adelman released, via the Institute for Higher Education Policy (IHEP), a new report titled The Spaces Between Numbers: Getting International Data on Higher Education Straight (November 2009). As the IHEP press release notes:

The research report, The Spaces Between Numbers: Getting International Data on Higher Education Straight, reveals that U.S. graduation rates remain comparable to those of other developed countries despite news stories about our nation losing its global competitiveness because of slipping college graduation rates. The only major difference—the data most commonly highlighted, but rarely understood—is the categorization of graduation rate data. The United States measures its attainment rates by “institution” while other developed nations measure their graduation rates by “system.”

The main target audience of this new report seems to be the OECD, though we (as users) of international higher ed data can all benefit from a good dig through the report. Adelman’s core objective is facilitating the creation of a new generation of indicators, indicators that are a lot more meaningful and policy-relevant than those that currently exist.

Second, Universities UK (UUK) released a data-laden report titled The impact of universities on the UK economy. As the press release notes:

Universities in the UK now generate £59 billion for the UK economy putting the higher education sector ahead of the agricultural, advertising, pharmaceutical and postal industries, according to new figures published today.

This is the key finding of Universities UK’s latest UK-wide study of the impact of the higher education sector on the UK economy. The report – produced for Universities UK by the University of Strathclyde – updates earlier studies published in 1997, 2002 and 2006 and confirms the growing economic importance of the sector.

The study found that, in 2007/08:

  • The higher education sector spent some £19.5 billion on goods and services produced in the UK.
  • Through both direct and secondary or multiplier effects this generated over £59 billion of output and over 668,500 full time equivalent jobs throughout the economy. The equivalent figure four years ago was nearly £45 billion (25% increase).
  • The total revenue earned by universities amounted to £23.4 billion (compared with £16.87 billion in 2003/04).
  • Gross export earnings for the higher education sector were estimated to be over £5.3 billion.
  • The personal off-campus expenditure of international students and visitors amounted to £2.3 billion.

Professor Steve Smith, President of Universities UK, said: “These figures show that the higher education sector is one of the UK’s most valuable industries. Our universities are unquestionably an outstanding success story for the economy.

See pp. 16-17 for a brief discussion of the impact of international student flows into the UK system.

These two reports are interesting examples of contributions to the debate about the meaning and significance of higher education vis a vis relative growth and decline at a global scale, and the value of a key (ostensibly under-recognized) sector of the national (in this case UK) economy.

And third, numbers, viewed from the perspectives of pattern and trend identification, were amply evident in a new Thomson Reuters report (CHINA: Research and Collaboration in the New Geography of Science) co-authored by the database crunchers from Evidence Ltd., a Leeds-based firm and recent Thomson Reuters acquisition. One valuable aspect of this report is that it unpacks the broad trends, and flags the key disciplinary and institutional dimensions of China’s new geography of science. As someone who worked at the National University of Singapore (NUS) for four years, I can understand why NUS is now China’s No.1 institutional collaborator (see p. 9), though the ‘why’ questions are not discussed in this type of broad mapping cum PR report for Evidence & Thomson Reuters.


Shifting tack, two new releases about international double and joint degrees — one (The Graduate International Collaborations Project: A North American Perspective on Joint and Dual Degree Programs) by the Council of Graduate Schools (CGS), and one (Joint and Double Degree Programs: An Emerging Model for Transatlantic Exchange) by the Institute of International Education (IIE) and the Freie Universität Berlin — remind us of the emerging desire to craft more focused, intense and ‘deep’ relations between universities, versus the current approach, which amounts to the promiscuous acquisition of hundreds if not thousands of memoranda of understanding (MoUs).

The IIE/Freie Universität Berlin book (link here for the table of contents) addresses various aspects of this development process:

The book seeks to provide practical recommendations on key challenges, such as communications, sustainability, curriculum design, and student recruitment. Articles are divided into six thematic sections that assess the development of collaborative degree programs from beginning to end. While the first two sections focus on the theories underpinning transatlantic degree programs and how to secure institutional support and buy-in, the third and fourth sections present perspectives on the beginning stages of a joint or double degree program and the issue of program sustainability. The last two sections focus on profiles of specific transatlantic degree programs and lessons learned from joint and double degree programs in the European context.

It is clear that international joint and double degrees are becoming a genuine phenomenon; so much so that key institutions including the IIE, the CGS, and the EU are all paying close attention to the degrees’ uses, abuses, and efficacy. Thus we should view this new book as an attempt to promote such degrees, but in a manner that examines the many forces that shape the collaborative process across space and between institutions. International partnerships are not simple to create, yet they are being demanded by more and more stakeholders. Why? Dissatisfaction that the rhetoric of ‘internationalization’ does not match up to the reality, and that there is a ‘deliverables’ problem.

Indeed, we hosted some senior Chinese university officials here in Madison several months ago and they used the term “ghost MoUs”, reflecting their dissatisfaction with filling filing cabinet after filing cabinet with signed MoUs that lead to absolutely nothing. In contrast, engagement via joint and double degrees, for example, or other forms of partnership (e.g., see International partnerships: a legal guide for universities), cannot help but deepen the level of connection between institutions of higher education on a number of levels. It is easy to ignore a MoU, but not so easy to ignore a bilateral scheme with clearly defined deliverables, a timetable for assessment, and a budget.

The value of tangible forms of international collaboration was certainly on view when I visited Brandeis University earlier this week. Brandeis’ partnership with Al-Quds University (in Jerusalem) links “an Arab institution in Jerusalem and a Jewish-sponsored institution in the United States in an exchange designed to foster cultural understanding and provide educational opportunities for students, faculty and staff.” Projects undertaken via the partnership have included administrative exchanges, academic exchanges, teaching and learning projects, and partnership documentation (an important but often forgotten about activity). The level of commitment to the partnership at Brandeis was genuinely impressive.

In the end, as debates about numbers, rankings, partnerships, MoUs — internationalization more generally — show us, it is only when we start grinding through the details and ‘working at the coal face’ (like Brandeis and Al-Quds seem to be doing), though in a strategic way, that we can really shift from rhetoric to reality.

Kris Olds

Are we witnessing a key moment in the reworking of the global higher education & research landscape?

Over the last several weeks more questions about the changing nature of the relative position of national higher education and research systems have emerged. These questions have often been framed around the notion that the US higher education system (assuming there is one system) might be in relative decline, that flagship UK universities (national champions?) like Oxford are unable to meet the challenges before them given the constraints they face, and that universities from ‘emerging’ regions (East and South Asia, in particular) are ‘rising’ due to the impact of continual or increasing investment in higher education and research.

Select examples of such contributions include this series in the Chronicle of Higher Education:

and these articles associated with the much debated THE-QS World University Rankings 2009:

The above articles and graphics in US and UK higher education media outlets were preceded by this working paper:

a US report titled:

and one UK report titled:

There are, of course, many other calls for increased awareness, or deep and critical reflection.  For example, back in June 2009, four congressional leaders in the USA:

asked the National Academies to form a distinguished panel to assess the competitive position of the nation’s research universities. “America’s research universities are admired throughout the world, and they have contributed immeasurably to our social and economic well-being,” the Members of Congress said in a letter delivered today. “We are concerned that they are at risk.”….

The bipartisan congressional group asked that the Academies’ panel answer the following question: “What are the top ten actions that Congress, state governments, research universities, and others could take to assure the ability of the American research university to maintain the excellence in research and doctoral education needed to help the United States compete, prosper, and achieve national goals for health, energy, the environment, and security in the global community of the 21st century?”

Recall that the US National Academies produced a key 2005 report (Rising Above the Gathering Storm) “which in turn was the basis for the “America COMPETES Act.” This Act created a blueprint for doubling funding for basic research, improving the teaching of math and science, and taking other steps to make the U.S. more competitive.” On this note see our 16 June 2008 entry titled ‘Surveying US dominance in science and technology for the Secretary of Defense’.

Taken together, these contributions are but a sample of the many concerns being expressed in 2009 in the Global North (especially the US & UK) about the changing geography of the global higher education and research landscape.

These types of articles and reports shed light, but can also raise anxiety levels (as they are sometimes designed to do). The better of them attempt to ensure that the angsts being felt in the long-dominant Global North are viewed with a critical eye, and that people realize that this is not a “zero-sum game” (as Philip Altbach puts it in the Chronicle’s ‘America Falling: Longtime Dominance in Education Erodes’). For example, the shifting terrain of global research productivity is partially a product of increasing volumes of collaboration and human mobility across borders, while key global challenges are just that – global in nature and impossible to attend to unless global teams of relatively equitable capacities are put together. Moreover, greater transnational education and research activity and experience arguably facilitates a critical disposition towards the most alarmist material, while concurrently reinforcing the point that the world is changing, albeit very unevenly, and that there are also many positive changes associated with a more dispersed higher education and research landscape.

We’ll do our best to post links to new global mappings like these as they emerge in the future.  Please ensure you let us know what is being published, be it rigorous, critical, analytical, alarmist, self-congratulatory, etc., and we’ll profile it on GlobalHigherEd.  The production of discourses on this new global higher education and research landscape is a key component of the process of change itself.  Thus we need to be concerned not just with the content of such mappings, but also the logics underlying the production of such mappings, and the institutional relations that bring such mappings into view for consumption.

Kris Olds