On being seduced by The World University Rankings (2011-12)

Well, it’s ranking season again, and the Times Higher Education/Thomson Reuters World University Rankings (2011-2012) has just been released. The outcome is available here, and a screen grab of the Top 25 universities is available to the right. Link here for a pre-programmed Google News search for stories about the topic, and link here for Twitter-related items (caught via the #THEWUR hashtag).

Polished up further after some unfortunate fallout from last year, this year’s exercise promises to give us an improved, shiny and clean result. But does it?

Like many people in the higher education sector, we too are interested in the ranking outcomes, though, to be honest, there are not many surprises in them.

Rather, what we’d like to ask our readers to reflect on is how the world university rankings debate is configured. Configuration elements include:

  • Ranking outcomes: Where is my university, or the universities of country X, Y, and Z, positioned in a relative sense (to other universities/countries; to peer universities/countries; in comparison to last year; in comparison to an alternative ranking scheme)?
  • Methods: Is the adopted methodology appropriate and effective? How has it changed? Why has it changed?
  • Reactions: How are key university leaders, or ministers (and equivalents) reacting to the outcomes?
  • Temporality: Why do world university rankers choose to release the rankings on an annual basis when once every four or five years is more appropriate (given the actual pace of change within universities)? How did they manage to normalize this pace?
  • Power and politics: Who is producing the rankings, and how do they benefit from doing so? How transparent are they themselves about their operations, their relations (including joint ventures), their biases, their capabilities?
  • Knowledge production: As is patently evident in our recent entry ‘Visualizing the uneven geographies of knowledge production and circulation,’ there is an incredibly uneven structure to the production of knowledge, including dynamics related to language and the publishing business.  Given this, how do world university rankings (which factor in bibliometrics in a significant way) reflect this structural condition?
  • Governance matters: Who is governing whom? Who is being held to account, in which ways, and how frequently? Are the ranked capable of doing more than acting as mere providers of information (for free) to the rankers? Is an effective mechanism needed for regulating rankers and the emerging ranking industry? Do university leaders have any capability (none shown so far!) to collaborate on ranking governance matters?
  • Context(s): How do schemes like the THE’s World University Rankings, the Academic Ranking of World Universities (ARWU), and the QS World University Rankings relate to broader attempts to benchmark higher education systems, institutions, and educational and research practices or outcomes? And here we flag the EU’s new U-Multirank scheme, and the OECD’s numerous initiatives (e.g., AHELO) to evaluate university performance globally, as well as engender debate about benchmarking too. In short, are rankings like the ones just released ‘fit for purpose’ for genuinely shedding light on the quality, relevance and efficiency of higher education in a rapidly evolving global context?

The Top 400 outcomes will and should be debated, and people will be curious about the relative place of their universities in the ranked list, as well as about the welcome improvements evident in the THE/Thomson Reuters methodology. But don’t be lured into distraction, focusing only on some of these questions, especially those dealing with outcomes, methods, and reactions.

Rather, we also need to ask harder questions about power, governance, and context, not to mention interests, outcomes, and potential collateral damage to the sector (when these rankings are released and then circulate into national media outlets and onto ministerial desktops). There is a political economy to world university rankings, and these schemes (all of them, not just the THE World University Rankings) are laden with power and generative of substantial impacts; impacts that the rankers themselves often do not hear about, nor feel (e.g., via the reallocation of resources).

Is it not time to think more broadly, and critically, about the big issues related to the great ranking seduction?

Kris Olds & Susan Robertson

14 thoughts on “On being seduced by The World University Rankings (2011-12)”

  1. Pingback: Ninth Level Ireland » Blog Archive » On being seduced by The World University Rankings (2011-12)

  2. Rankings publishers are to the globalised HE system what the credit rating agencies (S&P, Moody’s, Fitch) are to the globalised financial services industry, and we know how much use those services proved in the recent global financial meltdown. The deep conflicts of interest around apparently and avowedly disinterested, independent raters, and the perverse impacts their products had on the system they “objectively” described, have proved profoundly destabilising. We should fear the same in HE.

  3. Pingback: EDU WATCH: Todai slips in world rankings but still top among Asian universities; Chemistry-savvy, angry teen gasses classmates; judo instructor found guilty for child’s death during training; bus returning from school field trip goes over cliff R

  4. The World University Rankings has many of the same issues as the US News and World Report rankings. Institutions sit on the edge of their seats waiting to see if they “make the list” and can promote themselves based on these rankings. The comments under “Ranking outcomes” and “Reactions” could just as easily be about US News and World Report. Where do we draw the line? Is it more important to produce well-educated, critically thinking graduates, or to place well in the rankings? I think too many institutions can manipulate their data to move up the rankings. I think accountability is needed to validate these rankings.

  5. “Temporality: Why do world university rankers choose to release the rankings on an annual basis when once every four or five years is more appropriate (given the actual pace of change within universities)? How did they manage to normalize this pace?”

    Is there a real change from year to year at each institution, or, is it just a partial reshuffling of the deck?

  6. It is interesting that in the absence of other ways to compare institutions, these rankings, like the US News and World Report rankings are still considered valuable – even by those that ‘know better’.

  7. The information in this list is equivalent to the Top 100 Best Looking Males/Females of [insert year]. It is almost predictable who is going to be on the list, and with enough alterations and social connections, it is possible for anyone to get included. Institutions can manipulate and adjust their data to make themselves more rankable as well. Each institution has its good processes, and there is always room for improvement. If each institution were held accountable in each area in which it was ranked, such as showing degree audits to ensure true graduation in accordance with the curriculum, then the rankings could become more credible for rating institutions.

  8. It is interesting to see that the five headline categories do not seem to include retention or graduation rates. In my master’s studies this issue concerns many universities now, and I would think the schools with the highest numbers should be in the top rankings too.

  9. The World University Rankings are a popularity contest. A contest where there are many “top dogs” waiting to step in and take first place. Those involved in the contest do whatever it takes to reach the top and move up to that next ranking. It is all too easy for an institution to smudge some lines or change their data to reflect what’s needed to move up in the rankings. I agree with Miss Owens that institutions need to be held accountable, with some sort of assessment done yearly to qualify for the World University Rankings.

  10. What I believe is so sad is that those not involved in higher education do not know to look at these rankings with some suspicion. The general public takes these rankings as gospel, with no idea that the data supplied for these rankings is oftentimes manipulated.

  11. What are these rankings actually saying about the individual institutions? Do they tell us that graduates will exit and immediately begin their careers? Do the rankings prove that the individual who entered the institution will make an impact on society after graduation? There are tons of questions that can create debates in regard to the ranking of institutions of higher education. I firmly believe that the ranking of an institution does not speak to its quality. What your graduates are doing after they leave the institution, and how well your institution prepared them for post-undergraduate life, speaks to the quality of an institution and should be used to solidify these rankings. As a professional in the realm of higher education, I am constantly seeing it become more political. I feel as if we (key university leaders) are getting away from the main purpose of education, which is to educate.

  12. The author states that there is a certain seduction to the idea of a simple, straightforward ranking. In my recent experiences, I’ve struggled with this seduction. I spent much of my winter and early spring determining which college I will be attending for my MFA in the future. I looked at three different colleges. Two were ranked much higher in ranking systems like the THE ranking than the third, but I fell in love with the curriculum and progressive style of the third. I found myself almost convincing myself away from this third school because of the lower rankings. After introspection, however, I’ve decided to follow my heart and gut feelings. Rankings are simple and easy, but taken out of context, they have no weight whatsoever.

  13. I note the question “Do university leaders have any capability (none shown so far!) to collaborate on ranking governance matters?” and guess by the parenthesized comment that the author bemoans the lack of such collaboration. Therefore, can the author or another responder share individual university benefits of cross-university rankings collaboration?
