On the illogics of the Times Higher Education World Reputation Rankings (2013)

Note: the Inside Higher Ed version of this entry is available here.

~~~~~~~~

Amidst all the hype and media coverage related to the just-released Times Higher Education World Reputation Rankings (2013), it’s worth reflecting on just how small a proportion of the world’s universities is captured in this exercise (see below). As I noted last November, the term ‘world university rankings’ does not reflect the reality of the exercise the rankers are engaged in; they focus on only a minuscule corner of the institutional ecosystem of the world’s universities.

The firms associated with rankings have normalized an annual rankings cycle, despite this being an illogical exercise (unless you are interested in selling advertising space in a magazine and on a website). As Alex Usher pointed out earlier today in ‘The Paradox of University Rankings’ (and I quote in full):

By the time you read this, the Times Higher Education’s annual Reputation Rankings will be out, and will be the subject of much discussion on Twitter and the Interwebs and such.  Much as I enjoy most of what Phil Baty and the THE do, I find the hype around these rankings pretty tedious.

Though they are not an unalloyed good, rankings have their benefits.  They allow people to compare the inputs, outputs, and (if you’re lucky) processes and outcomes at various institutions.  Really good rankings – such as, for instance, the ones put out by CHE in Germany – even disaggregate data down to the departmental level so you can make actual apples-to-apples  comparisons by institution.

But to the extent that rankings are capturing “real” phenomena, is it realistic to think that they change every year?  Take the Academic Ranking of World Universities (ARWU), produced annually by Shanghai Jiao Tong University (full disclosure: I sit on the ARWU’s advisory board).   Those rankings, which eschew any kind of reputational surveys, and look purely at various scholarly outputs and prizes, barely move at all.  If memory serves, in the ten years since it launched, the top 50 has only had 52 institutions, and movement within the 50 has been minimal.  This is about right: changes in relative position among truly elite universities can take decades, if not centuries.

On the other hand, if you look at the Times World Reputation Rankings (found here), you’ll see that, in fact, only the position of the top 6 or so is genuinely secure.  Below about tenth position, everyone else is packed so closely together that changes in rank order are basically guaranteed, especially if the geographic origin of the survey sample were to change somewhat.  How, for instance, did UCLA move from 12th in the world to 9th overall in the THE rankings between 2011 and 2012 at the exact moment the California legislature was slashing its budget to ribbons?  Was it because of extraordinary new efforts by its faculty, or was it just a quirk of the survey sample?  And if it’s the latter, why should anyone pay attention to this ranking?

This is the paradox of rankings: the more important the thing you’re measuring, the less useful it is to measure it on an annual basis.  A reputation ranking done every five years might, over time, track some significant and meaningful changes in the global academic pecking order.  In an annual ranking, however, most changes are going to be the result of very small fluctuations or methodological quirks.  News coverage driven by those kinds of things is going to be inherently trivial.
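Usher’s point about tightly packed scores is worth unpacking. The sketch below is a purely hypothetical illustration, using made-up scores and an assumed noise level rather than anything drawn from THE’s survey or methodology, of how institutions separated by tiny score gaps can reshuffle between two survey rounds through sampling variation alone:

    import random

    # Hypothetical illustration only: twenty institutions whose "true"
    # reputation scores sit half a point apart, surveyed each year by a
    # different random sample of academics (noise_sd is an assumed value,
    # not anything taken from THE's methodology).
    random.seed(1)
    true_scores = [100 - 0.5 * i for i in range(20)]

    def observed_ranking(noise_sd=1.5):
        # Add survey-sample noise to each true score, then rank.
        noisy = [(score + random.gauss(0, noise_sd), inst)
                 for inst, score in enumerate(true_scores)]
        noisy.sort(reverse=True)
        return [inst for _, inst in noisy]

    # Two "annual" surveys of the same, unchanged institutions.
    year1, year2 = observed_ranking(), observed_ranking()
    moved = sum(1 for inst in range(20) if year1.index(inst) != year2.index(inst))
    print(f"{moved} of 20 institutions changed position between the two surveys")

With gaps that small, most institutions change position between the two runs even though nothing about the universities themselves has changed, which is Usher’s UCLA example in miniature.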

[Figure: Top 100, Times Higher Education World Reputation Rankings 2013]

The real issues to ponder are not relative placement in the ranking, or how universities’ positions have changed, but why this ranking was created in the first place and whose interests it serves.

Kris Olds


3 thoughts on “On the illogics of the Times Higher Education World Reputation Rankings (2013)”

  1. I stopped reading the Times Higher Education rankings years ago. The most interesting thing for me was always the instability of the rankings. The top 50 largely remained predictable from year to year, but beyond that there are wild swings over even short periods of time. Considering the complex methodology and the large number of respondents, it leaves me thinking that the idea of such fine gradations of ‘reputation’ is an invention, as you put it, “to sell advertising space”. Despite this, I still have friends who boast about the ranking of their school at some randomly picked point in the lifespan of the rankings. I think of the rankings as just one more of those annoying things that educators have to deal with on the way to their classrooms, like budget cuts or textbook sales staff.

  2. Pingback: Ninth Level Ireland » Blog Archive » On the illogics of the Times Higher Education World Reputation Rankings (2013)

  3. I couldn’t agree more with Alex Usher’s statement that “the more important the thing you’re measuring, the less useful it is to measure it on an annual basis”. Establishing a longer interval than a year between rankings would make any movement more meaningful and would better justify why the rankings are being conducted in the first place.
