On the illogics of the Times Higher Education World Reputation Rankings (2013)

Note: you can link here for the Inside Higher Ed version of the same entry.

~~~~~~~~

Amidst all the hype and media coverage related to the just-released Times Higher Education World Reputation Rankings (2013), it's worth reflecting on just how small a proportion of the world's universities are captured in this exercise (see below). As I noted last November, the term 'world university rankings' does not reflect the reality of the exercise the rankers are engaged in; they focus on only a minuscule corner of the institutional ecosystem of the world's universities.

The firms associated with rankings have normalized the temporal cycle of rankings despite this being an illogical exercise (unless you are interested in selling advertising space in a magazine and on a website).  As Alex Usher pointed out earlier today in ‘The Paradox of University Rankings‘ (and I quote in full):

By the time you read this, the Times Higher Education’s annual Reputation Rankings will be out, and will be the subject of much discussion on Twitter and the Interwebs and such.  Much as I enjoy most of what Phil Baty and the THE do, I find the hype around these rankings pretty tedious.

Though they are not an unalloyed good, rankings have their benefits.  They allow people to compare the inputs, outputs, and (if you’re lucky) processes and outcomes at various institutions.  Really good rankings – such as, for instance, the ones put out by CHE in Germany – even disaggregate data down to the departmental level so you can make actual apples-to-apples  comparisons by institution.

But to the extent that rankings are capturing “real” phenomena, is it realistic to think that they change every year?  Take the Academic Ranking of World Universities (ARWU), produced annually by Shanghai Jiao Tong University (full disclosure: I sit on the ARWU’s advisory board).   Those rankings, which eschew any kind of reputational surveys, and look purely at various scholarly outputs and prizes, barely move at all.  If memory serves, in the ten years since it launched, the top 50 has only had 52 institutions, and movement within the 50 has been minimal.  This is about right: changes in relative position among truly elite universities can take decades, if not centuries.

On the other hand, if you look at the Times World Reputation Rankings (found here), you’ll see that, in fact, only the position of the top 6 or so is genuinely secure.  Below about tenth position, everyone else is packed so closely together that changes in rank order are basically guaranteed, especially if the geographic origin of the survey sample were to change somewhat.  How, for instance, did UCLA move from 12th in the world to 9th overall in the THE rankings between 2011 and 2012 at the exact moment the California legislature was slashing its budget to ribbons?  Was it because of extraordinary new efforts by its faculty, or was it just a quirk of the survey sample?  And if it’s the latter, why should anyone pay attention to this ranking?

This is the paradox of rankings: the more important the thing you’re measuring, the less useful it is to measure it on an annual basis.  A reputation ranking done every five years might, over time, track some significant and meaningful changes in the global academic pecking order.  In an annual ranking, however, most changes are going to be the result of very small fluctuations or methodological quirks.  News coverage driven by those kinds of things is going to be inherently trivial.

[Image: Top 100, Times Higher Education World Reputation Rankings 2013]

The real issues to ponder are not relative placement in the ranking and how the position of universities has changed, but instead why this ranking was created in the first place and whose interests it serves.

Kris Olds

World University Rankings — Time for a Name Change?

I've often wondered if the term 'World University Rankings' — the one deployed by the firm QS in its QS World University Rankings®, or by TSL Education Ltd (along with Thomson Reuters) in their Times Higher Education World University Rankings — is an accurate, and indeed ethical, one to use.

My concern over the term was heightened during a visit to Jamaica last week, where I attended the Association of Commonwealth Universities (ACU) Conference of Executive Heads. I was invited by the ACU, the world's oldest international consortium, with 500+ member institutions in 37 countries, to engage in a debate about rankings with Ms. Zia Batool (Director General, Quality Assurance and Statistics, Higher Education Commission, Pakistan) and Mr. Phil Baty (Editor, Times Higher Education). Link here for a copy of the conference agenda. The event was very well organized, and Professor Nigel Harris, Chair of the ACU Council and Vice Chancellor of the University of the West Indies, was a wonderful host.

My concern about the term ‘World University Rankings’ relates to the very small number of universities that are ranked relative to the total number of universities around the world that have combined research and teaching mandates. World University Rankings is a term that implies there is a unified field of universities that can be legitimately compared and ranked in an ordinal hierarchical fashion on the basis of some common metrics.

The words 'World' + 'University' imply that all of the universities scattered across the world are up for consideration, and that they can and will be ranked. And as the online Merriam-Webster Dictionary defines it, 'rank' means:

2 a : relative standing or position
2 b : a degree or position of dignity, eminence, or excellence : distinction <soon took rank as a leading attorney — J. D. Hicks>
2 c : high social position <the privileges of rank>
2 d : a grade of official standing in a hierarchy
3 : an orderly arrangement : formation
4 : an aggregate of individuals classed together — usually used in plural
5 : the order according to some statistical characteristic (as the score on a test)

Even more than the term 'World Class Cities,' the term World University Rankings is inclusive in symbolism, implying that any student, staff or faculty member from any university on any continent could examine these rankings online, or in the glossy magazine we received via Times Higher Education, and cross their fingers that 'my' or 'our' university might be in the Top 200 or Top 400. But look at the chances.

Alas, the vast majority of the world's faculty, students and staff quickly feel depressed, dejected, unhappy, and sometimes concerned, when World University Ranking outcomes are examined. Students ask university leaders: "What's wrong with our university? Why are we not in the world university rankings?" Expectations spurred on by the term are dashed year after year. This might not be such a problem were it not for the fact that politicians and government officials in ministries of higher education, or indeed in prime ministerial offices, frequently react the same way.

But should they be feeling like they were considered and then rejected? No.

First, there are vast structural differences in the world of higher education related to scales of material resources, human resources (e.g., No. 1 Caltech's student-faculty ratio is 3:1!), access to the world's information and knowledge banks (e.g., via library databases), missions (including the mandate to serve the local region, build nations, serve the poor, or present minimal access hurdles), etc. Language matters too, for there is an undeniable global political and cultural economy to the world's publication outlets (see 'Visualizing the uneven geographies of knowledge production and circulation'). These structural differences exist and cannot be wished away or ignored.

Second, it is worth reminding consumers of World University Rankings that these analytical devices are produced by private sector firms based in cities like London, whose core mission is to monetize the data they acquire (from universities themselves, for free, as well as from other sources) so as to generate a profit. Is profit trumping ethics? Do they really believe it is appropriate to use a term that implies a unified field of universities can be legitimately compared and ranked in an ordinal hierarchical fashion?

Is there an alternative term to World University Rankings that would better reflect the realities of the very uneven global landscape of higher education and research? Rankings and benchmarking are here to stay, but surely there must be a better way of representing what is really going on than implying that everyone was considered, and 96-98% rejected. And let's not pretend a discussion of methodology via footnotes, or a few methods-oriented articles in the rankings special issue, gets the point across.

The rankers out there owe it to the world’s universities (made up of millions of committed and sincere students, faculty, and staff) to convey who is really in the field of comparison. The term World University Rankings should be reconsidered, and a more accurate alternative should be utilized: this is one way corporate social responsibility is practiced in the digital age.

Kris Olds

Measuring Academic Research in Canada: Field-Normalized Academic Rankings 2012

Greetings from Chicago, where I'm about to start a meeting at the Federal Reserve Bank of Chicago on Mobilizing Higher Education to Support Regional Innovation and A Knowledge-Driven Economy. The main objective of the meeting is to explore a possible higher education-focused follow-up to the OECD's Territorial Review: the Chicago Tri-State Metro Area. I'll develop an entry about this fascinating topic in the near future.

Before I head out to get my wake-up coffee, though, I wanted to alert you to a 'hot-off-the-press' report by Toronto-based Higher Education Strategy Associates (HESA). The report can be downloaded here in PDF format, and I've pasted in the quasi-press release below, which just arrived in my email inbox.

More fodder for the rankings debate, and sure to interest Canadian higher ed & research people, not to mention their international partners (current & prospective).

Kris Olds

>>>>>>

Research Rankings

August 28, 2012
Alex Usher

Today, we at HESA are releasing our brand new Canadian Research Rankings. We’re pretty proud of what we’ve accomplished here, so let me tell you a bit about them.

Unlike previous Canadian research rankings conducted by Research InfoSource, these aren’t simply about raw money and publication totals. As we’ve already seen, those measures tend to privilege strength in some disciplines (the high-citation, high-cost ones) more than others. Institutions which are good in low-citation, low-cost disciplines simply never get recognized in these schemes.

Our rankings get around this problem by field-normalizing all results by discipline. We measure institutions' current research strength through granting council award data, and we measure the depth of their academic capital ("deposits of erudition," if you will) through use of the H-index (which, if you'll recall, we used back in the spring to look at top academic disciplines). In both cases, we determine the national average of grants and H-indexes in every discipline, and then adjust each individual researcher's and department's scores to be a function of that average.

(Well, not quite all disciplines. We don’t do medicine because it’s sometimes awfully hard to tell who is staff and who is not, given the blurry lines between universities and hospitals.)
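
To make the field-normalization logic concrete, here is a minimal sketch of the general approach described above. It is not HESA's actual methodology or code: the institutions and disciplines appear only because they are mentioned in the surrounding text, the numbers are invented, the equal weighting of grants and H-index is an assumption, and the real rankings treat SSHRC- and NSERC-funded disciplines separately.

```python
from collections import defaultdict

# Illustrative records: (institution, discipline, H-index, grant dollars).
# All values are invented for the sketch.
researchers = [
    ("UBC", "Economics", 18, 120_000),
    ("UBC", "Philosophy", 9, 30_000),
    ("Rimouski", "Marine Biology", 14, 450_000),
    ("Toronto", "Economics", 22, 150_000),
    ("Toronto", "Marine Biology", 10, 300_000),
]

def field_normalized_scores(records):
    """Express each researcher's H-index and grant total as a ratio of the
    national average for their discipline, then average those ratios by
    institution (equal weighting of the two metrics is an assumption)."""
    # 1. National averages per discipline.
    by_discipline = defaultdict(list)
    for _, disc, h, grants in records:
        by_discipline[disc].append((h, grants))
    disc_avg = {
        d: (sum(h for h, _ in v) / len(v), sum(g for _, g in v) / len(v))
        for d, v in by_discipline.items()
    }

    # 2. Each researcher's score relative to their own discipline's average.
    by_institution = defaultdict(list)
    for inst, disc, h, grants in records:
        avg_h, avg_g = disc_avg[disc]
        by_institution[inst].append((h / avg_h + grants / avg_g) / 2)

    # 3. Institutional score = mean of its researchers' normalized scores,
    #    so raw size alone no longer drives the ranking.
    return {inst: sum(v) / len(v) for inst, v in by_institution.items()}

ranking = sorted(field_normalized_scores(researchers).items(),
                 key=lambda kv: kv[1], reverse=True)
for inst, score in ranking:
    print(f"{inst}: {score:.2f}")
```

With these invented numbers, the small institution whose faculty outperform their discipline's national average comes out on top, which is the 'Rimouski effect' Usher returns to below.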

Our methods help to correct some of the field biases of normal research rankings. But to make things even less biased, we separate out performance in SSHRC-funded disciplines and NSERC-funded disciplines, so as to better examine strengths and weaknesses in each of these areas. But, it turns out, strength in one is substantially correlated with strength in the other. In fact, the top university in both areas is the same: the University of British Columbia (a round of applause, if you please).

I hope you’ll read the full report, but just to give you a taste, here’s our top ten for SSHRC and NSERC disciplines.

Eyebrows furrowed because of Rimouski? Get over your preconceptions that research strength is a function of size. Though that’s usually the case, small institutions with high average faculty productivity can occasionally look pretty good as well.

More tomorrow.

Towards a Global Common Data Set for World University Rankers

Last week marked another burst of developments in the world university rankings sector, including the near-simultaneous release of two 'under 50' rankings: one from QS, and one from Times Higher Education (with Thomson Reuters).

A coincidence? Very unlikely. But who was first with the idea, and why would the other ranker time their release so closely? We don't know for sure, but we suspect the originator of the idea was Times Higher Education (with Thomson Reuters), as their outcome was formally released second. Moreover, the data analysis phase for the production of the THE 100 Under 50 was apparently "recalibrated," whereas the QS data and methodology were the same as in their regular rankings – they just sliced the data a different way. But you never know for sure, especially given Times Higher Education's unceremonious dumping of QS for Thomson Reuters back in 2009.

Speaking of competition and cleavages in the world university rankings world, it is noteworthy that India’s University Grants Commission announced, on the weekend, that:

Foreign universities entering into agreement with their Indian counterparts for offering twinning programmes will have to be among the global top 500.

The Indian varsities, on the other hand, should have received the highest accreditation grade, according to the new set of guidelines approved by the University Grants Commission today.

“The underlying objective is to ensure that only quality institutes are permitted for offering the twinning programmes to protect the interest of the students,” a source said after a meeting which cleared the regulations on twinning programmes.

They said foreign varsities entering into tie-ups with Indian partners should be ranked among the top 500 by the Times Higher Education World University Rankings or by Shanghai Jiaotong University's ranking of the top 500 universities [now deemed the Academic Ranking of World Universities].

Why does this matter? We'd argue that it is another sign of the multi-sited institutionalization of world university rankings. And institutionalization generates path dependency and normalization. When more closely tied to the logic of capital, it also generates uneven development, meaning that there are always winners and losers in the process of institutionalizing a sector. In this case the world's second most populous country, with a fast-growing higher education system, will be utilizing these rankings to mediate which universities (and which countries) its institutions can form linkages with.

Now, there are obvious pros and cons to the decision made by India's University Grants Commission, including reducing the likelihood that 'fly-by-night' operations and foreign for-profits will be able to link up with Indian higher education institutions when offering international collaborative degrees. This said, the establishment of such guidelines does not necessarily mean they will be implemented. But this news item from India, related news from Denmark and the Netherlands regarding the uses of rankings to guide elements of immigration policy (see 'What if I graduated from Amherst or ENS de Lyon…'; 'DENMARK: Linking immigration to university rankings'), as well as the emergence of the 'under 50' rankings, are worth reflecting on a little more. Here are two questions we'd like to leave you with.

First, does the institutionalization of world university rankings increase the obligations of governments to analyze the nature of the rankers? As in the case of ratings agencies, we would argue more needs to be known about the rankers, including their staffing, their detailed methodologies, their strategies (including with respect to monetization), their relations with universities and government agencies, potential conflicts of interest, and so on. To be sure, there are some very conscientious people working on the production and marketing of world university rankings, but these are individuals, and it is important to set up the rules of the game so that a fair and transparent system exists. After all, world university rankers contribute to the generation of outcomes yet do not have to experience the consequences of said outcomes.

Second, if government agencies are going to use such rankings to enable or inhibit international linkage formation processes, not to mention direct funding, or encourage mergers, or redefine strategy, then who should be the manager of the data that is collected? Should it solely be the rankers? We would argue that the stakes are now too high to leave the control of the data solely in the hands of the rankers, especially given that much of it is provided for free by higher education institutions in the first place. But if not these private authorities, then who else? Or, if not who else, then what else?

While we were drafting this entry on Monday morning, a weblog entry by Alex Usher (of Canada's Higher Education Strategy Associates) coincidentally generated a 'pingback' to an earlier entry titled 'The Business Side of World University Rankings.' Alex Usher's entry (pasted in below, in full) raises an interesting question that is worthy of careful consideration, not just because of the idea of how the data could be more fairly stored and managed, but also because of his suggestions regarding the process to push this idea forward:

My colleague Kris Olds recently had an interesting point about the business model behind the Times Higher Education’s (THE) world university rankings. Since 2009 data collection for the rankings has been done by Thomson Reuters. This data comes from three sources. One is bibliometric analysis, which Thomson can do on the cheap because it owns the Web of Science database. The second is a reputational survey of academics. And the third is a survey of institutions, in which schools themselves provide data about a range of things, such as school size, faculty numbers, funding, etc.

Thomson gets paid for its survey work, of course. But it also gets the ability to resell this data through its consulting business. And while there's little clamour for their reputational survey data (its usefulness is more than slightly marred by the fact that Thomson's disclosure about the geographical distribution of its survey responses is somewhat opaque) – there is demand for access to all that data that institutional research offices are providing them.

As Kris notes, this is a great business model for Thomson. THE is just prestigious enough that institutions feel they cannot say no to requests for data, thus ensuring a steady stream of data which is both unique and – perhaps more importantly – free. But if institutions which provide data to the system want any data out of it again, they have to pay.

(Before any of you can say it: HESA’s arrangement with the Globe and Mail is different in that nobody is providing us with any data. Institutions help us survey students and in return we provide each institution with its own results. The Thomson-THE data is more like the old Maclean’s arrangement with money-making sidebars).

There is a way to change this. In the United States, continued requests for data from institutions resulted in the creation of a Common Data Set (CDS); progress on something similar has been more halting in Canada (some provincial and regional ones exist but we aren’t yet quite there nationally). It’s probably about time that some discussions began on an international CDS. Such a data set would both encourage more transparency and accuracy in the data, and it would give institutions themselves more control over how the data was used.

The problem, though, is one of co-ordination: the difficulties of getting hundreds of institutions around the world to co-operate should not be underestimated. If a number of institutional alliances such as Universitas 21 and the Worldwide Universities Network, as well as the International Association of Universities and some key university associations were to come together, it could happen. Until then, though, Thomson is sitting on a tidy money-earner.
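
To make Usher's idea of a common data set a little more concrete, here is a purely hypothetical sketch of the kind of standardized institutional record an international CDS might define. The field names and values are invented for illustration; they do not reflect the US Common Data Set or any existing template, only the sorts of items (size, faculty numbers, funding, international composition) that institutions already supply to the rankers.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InstitutionRecord:
    # Hypothetical fields an international common data set might standardize;
    # names and units are invented for illustration only.
    name: str
    country: str
    reference_year: int
    students_fte: int
    academic_staff_fte: int
    research_income_usd: int
    international_students_pct: float

record = InstitutionRecord(
    name="Example University",
    country="CA",
    reference_year=2011,
    students_fte=32_000,
    academic_staff_fte=2_100,
    research_income_usd=310_000_000,
    international_students_pct=14.5,
)

# The point of a common data set is that the record is defined and vetted once,
# under the institutions' control, rather than resupplied separately (and for
# free) to each commercial data collector.
print(json.dumps(asdict(record), indent=2))
```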

While you could argue about the pros and cons of the idea of creating a 'global common data set,' including the likelihood of one coming into place, what Alex Usher is also implying is that there is a distinct lack of governance regarding world university rankers. Why are universities so anemic when it comes to this issue, and why are higher education associations not filling the governance space neglected by key national governments and international organizations? One answer is that their own individual self-interest has them playing the game as long as they are winning. Another possible answer is that they have not thought through the consequences, or really challenged themselves to generate an alternative. Another is that the 'institutional research' experts (e.g., those represented by the Association for Institutional Research in the case of the US) have not focused their attention on the matter. But whatever the answer, we think that they at least need to be posing themselves a set of questions. And if it's not going to happen now, when will it? Only after MIT demonstrates some high-profile global leadership on this issue, perhaps with Harvard, as it did with MITx and edX?

Kris Olds & Susan L. Robertson

The Business Side of World University Rankings

Over the last two years I’ve made the point numerous times here that world university rankings have become normalized on an annual cycle, and function as data acquisition mechanisms to drill deep into universities but in a way that encourages (seduces?) universities to provide the data for free. In reality, the data is provided at a cost given that the staff time allocated to produce the data needs to be paid for, and allocating staff time this way generates opportunity costs.

See below for the latest indicator of the business side of world university rankings. Interestingly, today's press release from Thomson Reuters (reprinted in full) makes no mention of world university rankings, nor of Times Higher Education, the media outlet owned by TSL Education, which was itself acquired by Charterhouse Capital Partners in 2007. Recall that it was in 2010 that Times Higher Education began working with Thomson Reuters.

The Institutional Profiles™ being marketed here derive data from "a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey," all of which (apart from the citation metrics) come to the firm via the 'Times Higher Education World University Rankings (powered by Thomson Reuters).'

Of course there is absolutely nothing wrong with providing services (for a charge) to enhance the management of universities, but would most universities (and their funding agencies) agree, from the start, to the establishment of a relationship where all data is provided for free to a centralized private authority headquartered in the US and UK, and then have this data both managed and monetized by the private authority? I’m not so sure.

This is arguably another case of universities thinking only of themselves and not looking at the bigger picture. We have a nearly complete absence of collective action on this kind of developmental dynamic; one worthy of greater attention, debate, and oversight, if not formal governance.

Kris Olds

<><><><>

12 Apr 2012

Thomson Reuters Improves Measurement of Universities’ Performance with New Data on Faculty Size, Reputation, Funding and Citation Measures

Comprehensive data now available in Institutional Profiles for universities such as Princeton, McGill, Nanyang Technological, University of Hong Kong and others

Philadelphia, PA, April 12, 2012 – The Intellectual Property & Science business of Thomson Reuters today announced the availability of 138 percent more performance indicators and nearly 20 percent more university data within Institutional Profiles™, the company’s online resource covering more than 500 of the world’s leading academic research institutions. This new data enables administrators and policy makers to reliably measure their institution’s performance and make international comparisons.

Using a combination of citation metrics from Web of KnowledgeSM, biographical information provided by institutions, and reputational data collected by Thomson Reuters Academic Reputation Survey, Institutional Profiles provides details on faculty size, student body, reputation, funding, and publication and citation data.

Two new performance indicators were also added to Institutional Profiles: International Diversity and Teaching Performance. These measure the global composition of staff and students, international co-authorship, and education input/output metrics, such as the ratio of students enrolled to degrees awarded in the same area. The indicators now cover 100 different areas, ensuring faculty and administrators have the most complete institutional data possible.

All of the data included in the tool has been vetted and normalized for accuracy. The latest update also includes several enhancements to existing performance indicators, such as Normalized Citation Impact. This allows for equally weighted comparisons between subject groups that have varying levels of citations.

“Institutional Profiles continues to provide answers to the questions that keep administrators up at night: ‘Beyond citation impact or mission statement, which institutions are the best collaboration partners for us to pursue? How can I understand the indicators and data that inform global rankings?’,” said Keith MacGregor, executive vice president at Thomson Reuters. “With this update, the tool provides the resources to reliably measure and compare academic and research performance in new and more complete ways, empowering strategic decision-making based on each institution’s unique needs.”

Institutional Profiles, a module within the InCites™ platform, is part of the research analytics suite of solutions provided by Thomson Reuters that supports strategic decision making and the evaluation and management of research. In addition to InCites, this suite of solutions includes consulting services, custom studies and reports, and Research in View™.

For more information, go to:
http://researchanalytics.thomsonreuters.com/institutionalprofiles/

About Thomson Reuters
Thomson Reuters is the world’s leading source of intelligent information for businesses and professionals. We combine industry expertise with innovative technology to deliver critical information to leading decision makers in the financial and risk, legal, tax and accounting, intellectual property and science and media markets, powered by the world’s most trusted news organization. With headquarters in New York and major operations in London and Eagan, Minnesota, Thomson Reuters employs approximately 60,000 people and operates in over 100 countries. For more information, go to http://www.thomsonreuters.com.

Contacts

Alyssa Velekei
Public Relations Specialist
Tel: +1 215 823 1894

Why now? Making markets via the THE World Reputation Rankings

The 2012 Times Higher Education (THE) World Reputation Rankings were released at 00.01 on 15 March by Times Higher Education via its website. It was intensely promoted via Twitter by the ‘Energizer Bunny’ of rankings, Phil Baty, and will be circulated in hard copy format to the magazine’s subscribers.

As someone who thinks there are more cons than pros related to the rankings phenomenon, I could not resist examining the outcome, of course! See below and to the right for a screen grab of the Top 20, with Harvard demolishing the others in the reputation standings.

I do have to give Phil Baty and his colleagues at Times Higher Education and Thomson Reuters credit for enhancing the reputation rankings methodology. Each year their methodology gets better and better.

But, and this is a big but, I have to ask myself why the reputation ranking is coming out on 15 March 2012 when the survey was distributed in April/May 2011, and when the data was already used in the 2011 World University Rankings, which were released in October 2011. It is not like the reputation outcome presented here is complex. The timing makes no sense whatsoever from an analytical angle.

However, if we think about the business of rankings, versus analytical cum methodological questions, the release of the ‘Reputation Rankings’ makes absolute sense.

First, the release of the reputation rankings now keeps the rankings agenda, and Times Higher Education/Thomson Reuters, elevated in both higher education and mass media outlets. The media coverage unfolding as you read this particular entry would not be emerging if the reputation rankings were bundled into the general World University Rankings that were released back in October. It is important to note that QS has adopted the same ‘drip drip’ approach with the release of field-specific ranking outcomes, regional ranking outcomes, etc. A single annual blast in today’s ‘attention economy’ is never enough for world university rankers.

Second, and on a related note, the British Council’s Going Global 2012 conference is being held in London from 13-15 March. As the British Council put it:

More than five hundred university Presidents, Vice-Chancellors and sector leaders will be among the 1300 delegates to the British Council’s ‘Going Global 2012’ conference in March.

The conference will be the biggest ever gathering of higher education leaders. More than 80 countries will be represented, as leaders from government, academia and industry debate a new vision of international education for the 21st century.

The Times Higher Education magazine is released every Thursday (so 15 March this week), and this event provides the firms TSL Education Ltd. and Thomson Reuters with a captive audience of 'movers and shakers' for their products and associated advertising. Times Higher Education is also an official media partner for Going Global 2012.

Make no mistake about it – there is an economic logic to releasing the reputation rankings today, and this trumps an analytical logic that should have led Times Higher Education to release the reputation outcome back in October so we could all better understand the world university ranking outcome and methodology.

More broadly, there really is no logic to the annual cycle of world rankings; if there were, funding councils worldwide would benchmark annually. But there is a clear business logic to normalizing the annual cycle of world university rankings, and this has indeed become the ‘new normal.’ But even this is not enough. Much like the development and marketing of running shoes, iPods, and fashion accessories, the informal benchmarking that has always gone on in academia has become formalized, commercialized, and splintered into distinct and constantly emerging products.

In the end, it is worth reflecting on whether such rankings are improving learning and research outcomes, as well as institutional innovation. And it is worth asking if the firms behind such rankings are themselves as open and transparent about their practices and agendas as they expect their research subjects (i.e., universities) to be.

Now back to those rankings. Congrats, Harvard!  But more importantly, I wonder if UW-Madison managed to beat Michigan…….oh oh.

Kris Olds

On being seduced by The World University Rankings (2011-12)

Well, it’s ranking season again, and the Times Higher Education/Thomson Reuters World University Rankings (2011-2012) has just been released. The outcome is available here, and a screen grab of the Top 25 universities is available to the right. Link here for a pre-programmed Google News search for stories about the topic, and link here for Twitter-related items (caught via the #THEWUR hash tag).

Polished up further after some unfortunate fall-outs from last year, this year's outcome promises to give us an improved, shiny and clean result. But is it?

Like many people in the higher education sector, we too are interested in the ranking outcomes, not that there are many surprises, to be honest.

Rather, what we’d like to ask our readers to reflect on is how the world university rankings debate is configured. Configuration elements include:

  • Ranking outcomes: Where is my university, or the universities of country X, Y, and Z, positioned in a relative sense (to other universities/countries; to peer universities/countries; in comparison to last year; in comparison to an alternative ranking scheme)?
  • Methods: Is the adopted methodology appropriate and effective? How has it changed? Why has it changed?
  • Reactions: How are key university leaders, or ministers (and equivalents) reacting to the outcomes?
  • Temporality: Why do world university rankers choose to release the rankings on an annual basis when once every four or five years is more appropriate (given the actual pace of change within universities)? How did they manage to normalize this pace?
  • Power and politics: Who is producing the rankings, and how do they benefit from doing so? How transparent are they themselves about their operations, their relations (including joint ventures), their biases, their capabilities?
  • Knowledge production: As is patently evident in our recent entry ‘Visualizing the uneven geographies of knowledge production and circulation,’ there is an incredibly uneven structure to the production of knowledge, including dynamics related to language and the publishing business.  Given this, how do world university rankings (which factor in bibliometrics in a significant way) reflect this structural condition?
  • Governance matters: Who is governing whom? Who is being held to account, in which ways, and how frequently? Are the ranked capable of doing more than acting as mere providers of information (for free) to the rankers? Is an effective mechanism needed for regulating rankers and the emerging ranking industry? Do university leaders have any capability (none shown so far!) to collaborate on ranking governance matters?
  • Context(s): How do schemes like the THE's World University Rankings, the Academic Ranking of World Universities (ARWU), and the QS World University Rankings, relate to broader attempts to benchmark higher education systems, institutions, and educational and research practices or outcomes? And here we flag the EU's new U-Multirank scheme, and the OECD's numerous initiatives (e.g., AHELO) to evaluate university performance globally, as well as to engender debate about benchmarking too. In short, are rankings like the ones just released 'fit for purpose' in genuinely shedding light on the quality, relevance and efficiency of higher education in a rapidly-evolving global context?

The Top 400 outcomes will and should be debated, and people will be curious about the relative place of their universities in the ranked list, as well as about the welcome improvements evident in the THE/Thomson Reuters methodology. But don't be drawn into distraction by focusing only on some of these questions, especially those dealing with outcomes, methods, and reactions.

Rather, we also need to ask more hard questions about power, governance, and context, not to mention interests, outcomes, and potential collateral damage to the sector (when these rankings are released and then circulate into national media outlets, and ministerial desktops). There is a political economy to world university rankings, and these schemes (all of them, not just the THE World University Rankings) are laden with power and generative of substantial impacts; impacts that the rankers themselves often do not hear about, nor feel (e.g., via the reallocation of resources).

Is it not time to think more broadly, and critically, about the big issues related to the great ranking seduction?

Kris Olds & Susan Robertson

‘Hotspots’ and international scientific collaboration

The OECD Science, Technology and Industry Scoreboard 2011: Innovation and Growth in Knowledge Economies report was released on 20 September.  While I’ve only seen the summary (which is the source for the first three images below) and an informative entry (‘A Changing Landscape: University hotspots for science and technology‘) in the OECD’s Education Today weblog, it is interesting to see a now common pattern and message emerging in these types of reports, and in a series of like-minded conferences, workshops, and associated reports (e.g. the Royal Society’s excellent Knowledge, Networks and Nations: Global Scientific collaboration in the 21st century, March 2011):

(a) relative stasis or decline in the OECD member countries (though they still do dominate, and will for decades to come);

(b) relatively fast growth within the so-called BRIC countries; and

(c) increased international collaboration, both as outcome and as aspiration.

And it is the aspiration for international collaboration that is particularly fascinating to ponder, for these types of scoreboards — analytical benchmarking cum geostrategic reframing exercises really — help produce insights on the evolving ‘lie of the land,’ while also flagging the ideal target spaces (countries, regions, institutions) for prospective future collaboration. National development processes and patterns thus drive change, but they interact in fascinating ways with the international collaborative process, which drives more international collaboration, and on it goes. As Alessandra Colecchia of the OECD puts it:

What does this [the changing landscape, and emerging ‘hotspots’] mean and why is it important? As students and researchers become more mobile, new sets of elite universities outside of the US could materialize. Whether or not we call it the “Banyan” or “Bonsai” League is yet to be determined, but it is clear that OECD countries may no longer have the monopoly on scientific excellence in higher education.

Luckily for us, education is generally not a zero-sum game. When others gain important insights and breakthroughs in science and technology, the entire field benefits. So wherever you are in the world, you can wear your college sweatshirt with pride.

True, though questions remain about the principles/missions/agendas driving international collaboration. For example, there is an ongoing scramble in Europe and North America to link up with research-active Brazilian institutions of higher education; an issue nicely summarized in today’s OBHE story titled ‘Brazil leads the charge from Latin America.’

As noted in the fourth image below (which was extracted from the Royal Society's Knowledge, Networks and Nations: Global Scientific collaboration in the 21st century), the nature of coauthor-based collaboration with Brazil is changing, with some countries edging closer, and others drifting away, as scholar-to-scholar ties deepen or thin. The reconfiguration is most likely deepening from 2008 on, as a slew of new policies, programs and projects get promoted and funded in both Brazil and actual or potential partner countries.

Some of the questions that come to my mind, after participating in some workshops where relations with Brazil were discussed, include:

  • What values drive these new initiatives to reach out across space into and out of Brazil?
  • What disciplines are factored in (or not), and what types of researchers (junior? senior? elite? emerging?) get supported?
  • What languages are they dependent upon, and what languages will they indirectly promote?
  • Are these international collaboration drives built on the principle of ‘you are only as strong as your weakest link’ (i.e. an exclusive one), or are they attendant to the need for capacity building and longer time horizons for knowledge development?
  • Are these international collaboration drives built upon implicit and explicit principles of reciprocity, or otherwise?
  • What about the territorial dimensions of the development process? Will we see hotspot to ’emerging hotspot’ linkages deepen, or will hotspots be linked up with non-hotspots and if so how, and why? Can an archipelago-like landscape of linked up hotspots ‘serve’ nations/regions/the world, or is it generative of exclusionary developmental tendencies?

These are but a few of many questions to ponder as we observe, and jointly construct, emerging ‘hotspots’ in the global higher education and research landscape.

Kris Olds

~~~~~~~~~~~~~~~

~~~~~

~~~~~

~~~~

Note: the first three images were extracted from the OECD Science, Technology and Industry Scoreboard 2011: Innovation and Growth in Knowledge Economies (Sept 2011). The fourth image was extracted from the Royal Society’s Knowledge, Networks and Nations: Global Scientific collaboration in the 21st century (March 2011).

Field-specific cultures of international research collaboration

Editors’ note: how can we better understand and map out the phenomenon of international research collaboration, especially in a context where bibliometrics does a patchy job with respect to registering the activities and output of some fields/disciplines? This is one of the questions Dr. Heike Jöns (Department of Geography, Loughborough University, UK) grapples with in this informative guest entry in GlobalHigherEd. The entry draws from Dr. Jöns’ considerable experience studying forms of mobility associated with the globalization of higher education and research.

Dr. Jöns (pictured above) received her PhD at the University of Heidelberg (Germany) and spent two years as a Feodor Lynen Postdoctoral Research Fellow of the Alexander von Humboldt Foundation at the University of Nottingham (UK). She is interested in the geographies of science and higher education, with particular emphasis on transnational academic mobility.

Further responses to ‘Understanding international research collaboration in the social sciences and humanities’, and Heike Jöns’ response below, are welcome at any time.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~~~~~~~~

The evaluation of research performance at European universities increasingly draws upon quantitative measurements of publication output and citation counts based on databases such as ISI Web of Knowledge, Scopus and Google Scholar (UNESCO 2010). Bibliometric indicators also inform annually published world university rankings such as the Shanghai and Times Higher Education rankings that have become powerful agents in contemporary audit culture despite their methodological limitations. Both league tables introduced field-specific rankings in 2007, differentiating between the natural, life, engineering and social sciences (both rankings), medicine (Shanghai) and the arts and humanities (Times Higher).

But to what extent do bibliometric indicators represent research output and collaborative cultures in different academic fields? This blog entry responds to this important question raised by Kris Olds (2010) in his GlobalHigherEd entry titled ‘Understanding international research collaboration in the social sciences and humanities‘ by discussing recent findings on field-specific research cultures from the perspective of transnational academic mobility and collaboration.

The inadequacy of bibliometric data for capturing research output in the arts and humanities has, for example, been demonstrated by Anssi Paasi's (2005) study of international publishing spaces. Decisions about which journals enter the respective databases, the databases' bias towards English-language journals, and their neglect of the monographs and anthologies that dominate in fields characterised by individual authorship are just a few of the reasons why citation indexes are not able to capture the complexity, place- and language-specificity of scholarship in the arts and humanities. Mapping the international publishing spaces in the sciences, the social sciences and the arts and humanities using ISI Web of Science data in fact suggests that the arts and humanities are less international and even more centred on the United States and Europe than the sciences (Paasi 2005: 781). Based on the analysis of survey data provided by 1,893 visiting researchers in Germany in the period 1954 to 2000, this GlobalHigherEd entry aims to challenge this partial view by revealing the hidden dimensions of international collaboration in the arts and humanities and by elaborating on why research output and collaborative cultures vary not only between disciplines but also between different types of research work (for details, see Jöns 2007; 2009).

The visiting researchers under study were funded by the Humboldt Research Fellowship Programme run by the Alexander von Humboldt Foundation (Bonn, Germany). They came to Germany in order to pursue a specific research project at one or more host institutions for about a year. Striking differences in collaborative cultures by academic field and type of research work are revealed by the following three questions:

1. Could the visiting researchers have done their research project also at home or in any other country?

2. To what extent did the visiting researchers write joint publications with colleagues in Germany as a result of their research stay?

3. In which ways did the collaboration between visiting researchers and German colleagues continue after the research stay?

On question 1.

Research projects in the arts and humanities, and particularly those that involved empirical work, were most often tied to the research context in Germany. They were followed by experimental and theoretical projects in engineering and in the natural sciences, which were much more frequently possible in other countries as well (Figure 1).

Figure 1 — Possibility of doing the Humboldt research project in another country than Germany, 1981–2000 (Source: Jöns 2007: 106)

These differences in place-specificity are closely linked to different possibilities for mobilizing visiting researchers on a global scale. For example, the establishment of new research infrastructure in the physical, biological and technical sciences can easily raise scientific interest in a host country, whereas the mobilisation of new visiting researchers in the arts and humanities remains difficult, as language skills and cultural knowledge are often necessary for conducting research projects in these fields. This is one reason why the natural and technical sciences appear to be more international than the arts and humanities.

On question 2.

Joint publications with colleagues in Germany were most frequently written in physics, chemistry, medicine, engineering and the biological sciences, which are all dominated by multi-authorship. Individual authorship was more frequent in mathematics and the earth sciences, and most popular – but with considerable variations between different subfields – in the arts and humanities. The spectrum ranged from every second economist and social scientist who wrote joint publications with colleagues in Germany, via roughly one third in language and cultural studies and history, and every fifth in law, to only every sixth in philosophy. Researchers in the arts and humanities had much more often than their colleagues from the sciences stayed in Germany for study and research prior to the Humboldt research stay (over 95% in the empirical arts and humanities compared to less than 40% in the theoretical technical sciences), as their area of specialisation often required learning the language and studying original sources or local research subjects. They therefore engaged much more closely with German language and culture than natural and technical scientists, but due to the great individuality of their work they not only produced considerably fewer joint publications than their apparently more international colleagues; their share of joint publications with German colleagues before and after the research stay was also fairly similar (Figure 2).

Figure 2 — Joint publications of Humboldt research fellows and colleagues in Germany, 1981–2000 (Source: Jöns 2007: 107)

For these reasons, internationally co-authored publications are not suitable for evaluating the international attractiveness and orientation of different academic fields, particularly because the complexity of different types of research practices in one and the same discipline makes it difficult to establish typical collaborative cultures against which research output and collaborative linkages could be judged.

On question 3.

This is confirmed when examining continued collaboration with colleagues in Germany after the research stay. The frequency of continued collaboration did not vary significantly between disciplines but the nature of these collaborations differed substantially. Whereas regular collaboration in the natural and technical sciences almost certainly implied the publication of multi-authored articles in internationally peer-reviewed journals, continued interaction in the arts and humanities, and to a lesser extent in the social sciences, often involved activities beyond the co-authorship of journal articles. Table 1 documents some of these less well-documented dimensions of international research collaboration, including contributions to German-language scientific journals and book series as well as refereeing for German students, researchers and the funding agencies themselves.



Table 1 — Activities of visiting researchers in Germany after their research stay (in % of Humboldt research fellows 1954-2000; Source: Jöns 2009: 327)

The differences in both place-specificity and potential for co-authorship in different research practices can be explained by their particular spatial ontology. First, different degrees of materiality and immateriality imply varying spatial relations that result in typical patterns of place-specificity and ubiquity of research practices as well as of individual and collective authorship. Due to the corporeality of researchers, all research practices are to some extent physically embedded and localised. However, researchers working with physically embedded material research objects that might not be moved easily, such as archival material, field sites, certain technical equipment, groups of people and events, may be dependent on accessing a particular site or local research context at least once. Those scientists and scholars who primarily deal with theories and thoughts are in turn as mobile as the embodiment of these immaterialities (e.g., collaborators, computers, books) allows them to be. Theoretical work in the natural sciences, including, for example, many types of mathematical research, thus appears to be the most 'ubiquitous' subject: its high share of immaterial thought processes compared to relatively few material resources involved in the process of knowledge production (sometimes only pen and paper) would often make it possible, from the perspective of the researchers, to work in a number of different places (Figure 1, above).

Second, the constitutive elements of research vary according to their degree of standardisation. Standardisation results from the work and agreement previously invested in the classification and transformation of research objects. A high degree of standardisation would mean that the research practice relies on many uniform terms, criteria, formulas and data, components and materials, methods, processes and practices that are generally accepted in the particular field of academic work. Field sites, for example, might initially show no signs of standardisation, whereas laboratory equipment such as test tubes may have been manufactured on the basis of previous – and then standardised – considerations and practices. The field site may be unique, highly standardised laboratory equipment may be found at several sites to which the networks of science have been extended, thereby offering greater flexibility in the choice of the research location. In regard to research practices with a higher degree of immateriality, theoretical practices in the natural and technical sciences show a higher degree of standardisation (e.g., in terms of language) when compared to theoretical and argumentative-interpretative work in the arts and humanities and thus are less place-specific and offer more potential for co-authorship (Figures 1 and 2).

The resulting two-dimensional matrix on the spatial relations of different research practices accommodates the empirically observed differences in both the place-specificity of the visiting researchers' projects and their resulting joint publications with colleagues in Germany (Figure 3):

Figure 3 — A two-dimensional matrix on varying spatial relations of different research practices (Source: Jöns 2007: 109)

Empirical work, showing a high degree of materiality and a low degree of standardisation, is most often dependent on one particular site, followed by argumentative-interpretative work, which is characterised by a similar low degree of standardisation but a higher degree of immateriality. Experimental (laboratory) work, showing a high degree of both materiality and standardisation, can often be conducted in several (laboratory) sites, while theoretical work in the natural sciences, involving both a high degree of immateriality and standardisation is most rarely tied to one particular site. The fewest joint publications were written in argumentative-interpretative work, where a large internal (immaterial) research context and a great variety of arguments from different authors in possibly different languages complicate collaboration on a specific topic. Involving an external (material) and highly standardised research context, the highest frequency of co- and multi-authorship was to be found in experimental (laboratory) work. In short, the more immaterial and standardised the research practice, the lower is the place-specificity of one’s work and the easier it would be to work at home or elsewhere; and the more material and standardised the research practice, the more likely is collaboration through co- and multi-authorship.

Based on this work, it can be concluded – in response to two of Kris Olds' (2010) key questions – that international research collaboration on a global scale can be mapped – if only roughly – for research practices characterised by co- and multi-authorship in internationally peer-reviewed English-language journals, as the required data is provided by citation databases (e.g., Wagner and Leydesdorff 2005; Adams et al. 2007; Leydesdorff and Persson 2010; Matthiessen et al. 2010; UNESCO 2010). When interpreting such mapping exercises, however, one needs to keep in mind that the data included in ISI Web of Knowledge, Scopus and Google Scholar themselves vary considerably.
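
As a rough illustration of what such a mapping exercise involves, the sketch below counts internationally co-authored papers per country pair from a list of papers, each reduced to the set of countries in its author addresses. The records and the 'whole counting' choice are illustrative assumptions; real studies work from Web of Science or Scopus exports and require careful cleaning of author addresses.

```python
from collections import Counter
from itertools import combinations

# Illustrative records: each paper reduced to the set of countries appearing
# in its author addresses (invented data).
papers = [
    {"BR", "US"},
    {"BR", "DE", "US"},
    {"DE"},            # single-country paper: no international link
    {"BR", "FR"},
    {"US", "DE"},
]

def international_links(papers):
    """Whole counting of co-authorship links: every country pair appearing
    together on a paper gets one count for that paper."""
    pair_counts = Counter()
    for countries in papers:
        for pair in combinations(sorted(countries), 2):
            pair_counts[pair] += 1
    return pair_counts

for (a, b), n in international_links(papers).most_common():
    print(f"{a}-{b}: {n} co-authored paper(s)")
```

Even a toy count like this makes the caveat above visible: the resulting map reflects only what the chosen database indexes, so fields that publish mainly in monographs or non-English journals largely disappear from it.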

Other research practices require different research methods such as surveys and interviews and thus can only be mapped from specific perspectives such as individual institutions or groups of researchers (for the application of bibliometrics to individual journals in the arts and humanities, see Leydesdorff and Salah 2010). It might be possible to create baseline studies that help to judge the type and volume of research output and international collaboration against typical patterns in a field of research but the presented case study has shown that the significance of specific research locations, of individual and collective authorship, and of different types of transnational collaboration varies not only between academic fields but also between research practices that crisscross conventional disciplinary boundaries.

In the everyday reality of departmental research evaluation this means that in fields such as geography, a possible benchmark of three research papers per year may be easily produced in most fields of physical geography and some fields of human geography (e.g. economic and social) whereas the nature of research practices in historical and cultural geography, for example, might make it difficult to maintain such a high research output over a number of subsequent years. Applying standardised criteria of research evaluation to the great diversity of publication and collaboration cultures inevitably bears the danger of leading to a standardisation of academic knowledge production.

Heike Jöns

References

Adams J, Gurney K and Marshall S 2007 Patterns of international collaboration for the UK and leading partners Evidence Ltd., Leeds

Jöns H 2007 Transnational mobility and the spaces of knowledge production: a comparison of global patterns, motivations and collaborations in different academic fields Social Geography 2 97-114  Accessed 23 September 2010

Jöns H 2009 ‘Brain circulation’ and transnational knowledge networks: studying long-term effects of academic mobility to Germany, 1954–2000 Global Networks 9 315-38

Leydesdorff L and Persson O 2010 Mapping the geography of science: distribution patterns and networks of relations among cities and institutes Journal of the American Society for Information Science and Technology 61 1622-34

Leydesdorff L and Salah A A A 2010 Maps on the basis of the Arts & Humanities Citation Index: the journals Leonardo and Art Journal, and “Digital Humanities” as a topic Journal of the American Society for Information Science and Technology 61 787-801

Matthiessen C W, Schwarz A W and Find S 2010 World cities of scientific knowledge: systems, networks and potential dynamics. An analysis based on bibliometric indicators Urban Studies 47 1879-97

Olds K 2010 Understanding international research collaboration in the social sciences and humanities GlobalHigherEd 20 July 2010  Accessed 23 September 2010

Paasi A 2005 Globalisation, academic capitalism, and the uneven geographies of international journal publishing spaces Environment and Planning A 37 769-89

UNESCO 2010 World Social Science Report: Knowledge Divides UNESCO, Paris

Wagner C S and Leydesdorff L 2005 Mapping the network of global science: comparing international co-authorships from 1990 to 2000 International Journal of Technology and Globalization 1 185–208


Governing world university rankers: an agenda for much needed reform

Is it now time to ensure that world university rankers are overseen, if not governed, so as to achieve better quality assessments of the differential contributions of universities in the global higher education and research landscape?

In this brief entry we make the case that something needs to be done about the system in which world university rankers operate. We offer two brief points about why, and then outline some options for moving beyond today's status quo.

First, while universities and rankers are both interested in how well universities are positioned in the emerging global higher education landscape, power over the process, as currently exercised, rests solely with the rankers. Clearly firms like QS and Times Higher Education are open to input, advice, and indeed critique, but in the end they, along with information services firms like Thomson Reuters, decide:

  • How the methodology is configured
  • How the methodology is implemented and vetted
  • When and how the rankings outcomes are released
  • Who is permitted access to the base data
  • When and how errors are corrected in rankings-related publications
  • What lessons are learned from errors
  • How the data is subsequently used

Rankers have authored the process, and universities (not to mention associations of universities, and ministries of education) have simply handed over the raw data. Observers of this process might be forgiven for thinking that universities have acquiesced to the rankers’ desires with remarkably little thought. How and why we’ve ended up in such a state of affairs is a fascinating (if not alarming) indicator of how fearful many universities are of being erased from increasingly mediatized viewpoints, and how slow universities and governments have been in adjusting to the globalization of higher education and research, including the desectoralization process. This situation has some parallels with the ways that ratings agencies (e.g., Standard and Poor’s or Moody’s) have been able to operate over the last several decades.

Second, and as has been noted in two of our recent entries:

the costs associated with providing rankers (especially QS and THE/Thomson Reuters) with data are increasingly concentrated on universities.

On a related note, there is no rationale for the now-annual rankings cycle that the rankers have been able to normalize so successfully. What really changes on a year-to-year basis, apart from changes in ranking methodologies? Or, to paraphrase Macquarie University's vice-chancellor, Steven Schwartz, in this Monday's Sydney Morning Herald:

“I’ve never quite adjusted myself to the idea that universities can jump around from year to year like bungy jumpers,” he says.

“They’re like huge oil tankers; they take forever to turn around. Anybody who works in a university realises how little they change from year to year.”

Indeed, if the rationale for an annual cycle of rankings were so obvious, government ministries would surely facilitate more annual assessment exercises. Even the most managerial and bibliometric-predisposed of governments anywhere – in the UK – has spaced its intensive research assessment exercise out over a 4-6 year cycle. And yet the rankers have universities on the run. Why? Because this cycle facilitates data provision for commercial databases, and it enables increasingly competitive rankers to construct their own lucrative markets. This, perhaps, explains QS's 6 July 2010 reaction to a call in GlobalHigherEd for a four-year rather than one-year rankings cycle.

Thus we have a situation where rankers seeking to construct media/information service markets are driving up data provision time and costs for universities, facilitating continual change in methodologies, and, as a consequence, generating some surreal swings in ranked positions. Signs abound that rankers are driving too hard and taking too many risks, while failing to respect universities, especially those outside the upper echelon of the rank orders.

Assuming you agree that something should happen, the options for action are many. Given what we know about the rankers, and the universities that are ranked, we have developed four options, in no order of priority, to further discussion on this topic. Clearly there are other options, and we welcome alternative suggestions, as well as critiques of our ideas below.

The first option for action is the creation of an ad-hoc task force by 2-3 associations of universities located within several world regions, the International Association of Universities (IAU), and one or more international consortia of universities. Such an initiative could build on the work of the European University Association (EUA), which created a regionally-specific task force in early 2010. Following an agreement to halt world university rankings for two years (2011 & 2012), this new ad-hoc task force could commission a series of studies on the world university rankings phenomenon, as well as on alternative options for assessing, benchmarking and comparing higher education performance and quality. In the end the current status quo regarding world university rankings might simply be endorsed, but such an approach could just as easily lead to new approaches, new analytical instruments, and new concepts that might better shed light on the diverse impacts of contemporary universities.

A second option is an inter-governmental agreement about the conditions in which world university rankings can occur. This agreement could be forged in the context of bilateral relations between ministers in select countries: a US-UK agreement, for example, would ensure that the rankers reform their practices. A variation on this theme is an agreement among ministers of education (or their equivalent) in the context of the annual G8 University Summit (to be held in 2011), or the next Global Bologna Policy Forum (to be held in 2012), which will bring together 68+ ministers of education.

The third option for action is non-engagement, in the form of an organized boycott. This option would have to be pushed by one or more key associations of universities. The outcome of this strategy, assuming it is effective, would be the shutdown, for the foreseeable future, of ranking schemes like the QS and THE world university rankings that depend on unique, university-provided data. Numerous other schemes (e.g., the new High Impact Universities) would carry on, of course, for they use more easily available or generated forms of data.

A fourth option is the establishment of an organization that has the autonomy, and the resources, to oversee rankings initiatives, especially those that depend upon university-provided data. No such organization currently exists, for the only one that comes close to what we are calling for (the IREG Observatory on Academic Ranking and Excellence) suffers from the inclusion of too many rankers on its executive committee (a recipe for serious conflicts of interest) and from reliance on member fees for a significant portion of its budget (ditto).

In closing, the acrimonious split between QS and Times Higher Education, and the formal entry of Thomson Reuters into world university rankings, have elevated this phenomenon to a new 'higher-stakes' level. Given these developments, given the expenses associated with providing the data, given some of the glaring errors and biases associated with the 2010 rankings, and given the problems associated with using university-scaled quantitative measures to assess 'quality' in a relative sense, we think it is high time for some new forms of action. And by action we don't mean more griping about methodology, but attention to the ranking system that universities are embedded in, yet have singularly failed to construct.

The current world university rankings juggernaut is blinding us, yet innovative new assessment schemes — schemes that take into account the diversity of institutional geographies, profiles, missions, and stakeholders — could be fashioned if we take pause. It is time to make more proactive decisions about just what types of values and practices should underlie comparative institutional assessments within the emerging global higher education landscape.

Kris Olds, Ellen Hazelkorn & Susan Robertson

Rankings: a case of blurry pictures of the academic landscape?

Editors’ note: this guest entry has been kindly contributed by Pablo Achard (University of Geneva). After a PhD in particle physics at CERN and the University of Geneva (Switzerland), Pablo Achard moved to the universities of Marseilles (France), Antwerp (Belgium) and Brandeis (MA) to pursue research in computational neuroscience. He currently works at the University of Geneva, where he supports the Rectorate on bibliometrics and strategic planning issues. Our thanks to Dr. Achard for this insider's take on the challenges of making sense of world university rankings.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~

While national rankings of universities can be traced back to the 19th century, international rankings appeared at the beginning of the 21st century [1]. Shanghai Jiao Tong University's and Times Higher Education's (THE) rankings were among the pioneers and remain among the most visible. But you might have heard of similar league tables designed by the CSIC, the University of Leiden, the HEEACT, QS, the University of Western Australia, RatER, Mines Paris Tech, etc. Such a proliferation certainly responds to high demand. But what are these rankings worth? I argue here that rankings are blurry pictures of the academic landscape. As such, they are much better than complete blindness, but they should be used with great care.

Blurry pictures

The image of the academic landscape captured by the rankings is always a bit out of focus. This is improving with time, and we should acknowledge the rankers, who make considerable efforts to improve the sharpness. Nonetheless, a perfectly sharp image remains an unattainable ideal.

First of all, it is very difficult to get clean and comparable data on such a large scale. Reality is always grey; the act of counting is black and white. Take such a central element as a "researcher". What should you count? Heads or full-time equivalents? Full-time equivalents based on contracts or on the effective time spent at the university? Do you include PhD "students"? Visiting scholars? Professors on sabbatical? Research engineers? Retired professors who still run a lab? Deans who don't? What do you do with researchers affiliated with non-university research organizations that are still loosely connected to a university (think of Germany or France here)? And how do you collect the data?

This difficulty in obtaining clean and comparable data is the main reason for the lack of any good indicator of teaching quality. To do it properly, one would need to evaluate students' level of knowledge upon graduation, and possibly compare it with their level when they entered the university. To this end, the OECD is launching a project called AHELO, but it is still in its pilot phase. In the meantime, some rankers use poor proxies (like the percentage of international students) while others focus their attention on research outcomes only.

Second, some indicators are very sensitive to "noise" due to small statistics. This is the case for the number of Nobel prizes used by the Shanghai ranking. No doubt having 20 of them in your faculty says something about its quality. But having one, obtained years ago, for work partly or fully done elsewhere? Because of the long-tailed distribution of ranking scores, such a single event won't push a university ranked 100 into the top 10, but a university ranked 500 can gain more than a hundred places.
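
To see why a fixed, one-off boost plays out so differently at different depths of a long-tailed score distribution, here is a small illustrative simulation; the scores and the size of the "one prize" bonus are entirely hypothetical and are chosen only to mimic the pattern described above.

```python
import random

random.seed(42)

# 1,000 hypothetical universities with long-tailed (Pareto-distributed) scores,
# sorted from best to worst. Purely illustrative numbers.
scores = sorted((random.paretovariate(1.2) for _ in range(1000)), reverse=True)

def rank_after_bonus(old_rank, bonus):
    """1-based rank of the university at old_rank after adding a fixed bonus."""
    boosted = scores[old_rank - 1] + bonus
    others = scores[:old_rank - 1] + scores[old_rank:]
    return sum(s > boosted for s in others) + 1

BONUS = 2.0  # hypothetical score contribution of a single, long-ago prize
for r in (100, 500):
    print(f"rank {r} -> rank {rank_after_bonus(r, BONUS)} after the bonus")
# Typical output: the university at rank 100 moves up only a few dozen places
# (nowhere near the top 10), while the one at rank 500 gains hundreds of places,
# because scores in the tail are packed much more closely together.
```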

This dynamic seems to have occurred in the most recent THE ranking. In their new methodology, the "citation impact" of a university counts for one third of the final score. Not many details were given on how this impact is calculated, but the description on the THE's website, and the way this impact is calculated by Thomson Reuters – who provides the data to THE – in its commercial product InCites, make me believe that they used the so-called "Leiden crown indicator". This indicator is a welcome improvement on the raw ratio of citations per publication, since it takes into account the citation behaviours of the different disciplines. But it suffers from instability if you look at a small set of publications, or at publications in fields where you don't expect many citations [2]: the denominator can become very small, leading to sky-high ratios. This is likely what happened with Alexandria University. According to this indicator, Alexandria ranks 4th in the world, surpassed only by Caltech, MIT and Princeton. This is an unexpected result for anyone who knows the world research landscape [3].
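
For what it is worth, here is a minimal numerical sketch of that instability, assuming the old Leiden crown indicator (a ratio of sums: total citations divided by total field-expected citations). The attribution of the THE score to this indicator is, as noted above, an inference, and the figures below are invented purely for illustration.

```python
# Old-style Leiden "crown indicator" (CPP/FCSm): sum of observed citations
# divided by the sum of field-expected citations. All numbers are hypothetical.

def crown_indicator(citations, expected):
    return sum(citations) / sum(expected)

# A large, typical publication set: 200 papers, each cited about as often
# as expected in its field.
large_cites = [10] * 200
large_expected = [10.0] * 200

# A small set in a low-citation field: five papers with tiny expected citation
# rates, one of which happens to pick up eight citations.
small_cites = [8, 0, 1, 0, 0]
small_expected = [0.3, 0.2, 0.3, 0.2, 0.2]

print(crown_indicator(large_cites, large_expected))  # 1.0: the world average
print(crown_indicator(small_cites, small_expected))  # 7.5: a sky-high ratio
```

The tiny denominator (1.2 expected citations in total) is doing all the work in the second case, which is the instability note [2] refers to.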

Third, it is well documented that the act of measuring triggers the act of manipulating the measure. And this is made easy when the data are provided by the universities themselves, as for the THE or QS rankings. One can only be suspicious when reading the cases emphasized by Bookstein and colleagues: “For whatever reason, the quantity THES assigned to the University of Copenhagen staff-student ratio went from 51 (the sample median) in 2007 to 100 (a score attained by only 12 other schools in the top 200) […] Without this boost, Copenhagen’s […] ranking would have been 94 instead of 51. Another school with a 100 student-staff rating in 2009, Ecole Normale Supérieure, Paris, rose from the value of 68 just a year earlier, […] thus earning a ranking of 28 instead of 48.”

Pictures of a landscape are taken from a given point of view

But let’s suppose that the rankers can improve their indicators to obtain perfectly focused images. Let’s imagine that we have clean, robust and hardly manipulable data to rely on. Would the rankings give a neutral picture of the academic landscape? Certainly not. There is no such thing as “neutrality” in any social construct.

Some rankings are built with a precise output in mind. The most laughable example of this was Mines Paris Tech’s ranking, placing itself and four other French “grandes écoles” in the top 20. This is probably the worst flaw of any ranking. But other types of biases are always present, even if less visible.

Most rankings are built with a precise question in mind. Let's look at the evaluation of the impact of research. Are you interested in finding the key players, in which case the volume of citations is one way to go? Or are you interested in finding the most efficient institutions, in which case you would normalize the citations by some input (number of articles, number of researchers, or budget)? Different questions need different indicators, hence different rankings. This is the approach followed by Leiden, which publishes several rankings at a time. However, this is not the sexiest or most media-friendly approach.

Finally, all rankings are built with a model in mind of what a good university is. "The basic problem is that there is no definition of the ideal university", a point made forcefully today by University College London's Vice-Chancellor. Often, the Harvard model is the implicit model. In this case, getting Harvard on top is a way to check for "mistakes" in the design of the methodology. But the missions of the university are many. One usually talks about the production (research) and the dissemination (teaching) of knowledge, together with a "third mission" towards society that can in turn have many different meanings, from the creation of spin-offs to the reduction of social inequities. For these different missions, different indicators are needed. The salary of fresh graduates is probably a good indicator for judging MBA programmes and certainly a bad one for liberal arts colleges.

To pursue the photography metaphor, every snapshot is taken from a given point of view and with a given aim. Points of view and aims can be made visible, as is the case in artistic photography. They can also pretend to neutrality, as in photojournalism. But this neutrality is wishful thinking. The same applies to rankings.

Useful pictures

Rankings are nevertheless useful pictures. Insiders who have a comprehensive knowledge of the global academic landscape understandably laugh at rankings' flaws. However, the increase in the number of rankings and in their use tells us that they fill a need. Rankings can be viewed as the dragon of New Public Management and accountability assaulting the ivory tower of disinterested knowledge. They certainly participate in a global shift in the contract between society and universities. But I can hardly believe that the Times would spend thousands if not millions for such a purpose.

What then is the social use of rankings? I think they are the most accessible vision of the academic landscape for millions of "outsiders". The CSIC ranks around 20,000 (yes, twenty thousand!) higher education institutions. Who could expect everyone to be aware of their qualities? Think of young students, employers, politicians or academics from not-so-well-connected universities. Is everyone in the Midwest able to evaluate the quality of research at a school strangely named Eidgenössische Technische Hochschule Zürich?

Even to insiders, rankings tell us something. Thanks to improvements in picture quality and to the multiplication of points of view, rankings form an image that is not uninteresting. If a university is regularly in the top 20, this is significant: you can expect to find there one of the best research and teaching environments. If it is regularly in the top 300, this is also significant: you can expect to find one of the few universities where the "global brain market" takes place. If a country, like China, increases its share of good universities over time, this is significant and suggests that a long-term 'improvement' (at least in the direction of what the rankings deem important) of its higher education system is under way.

Of course, any important decision concerning where to study, where to work or which project to embark on must be made on the basis of more criteria than rankings. Just as one would never go mountain climbing based solely on blurry snapshots of the range, one should not use rankings as one's only source of information about universities.

Pablo Achard


Notes

[1] See The Great Brain Race: How Global Universities are Reshaping the World, Ben Wildavsky, Princeton University Press, 2010; and more specifically its chapter 4, "College Rankings Go Global".

[2] The Leiden researchers have recently decided to adopt a more robust indicator for their studies (http://arxiv.org/abs/1003.2167). But whatever indicator is used, the problem will remain for small statistical samples.

[3] See recent discussions on the University Ranking Watch blog for more details on this issue.