Well, an email arrived today and I just could not help myself…I clicked on the THE-QS World University Rankings 2009 links that were provided to see who received what ranking. In addition, I did a quick Google scan of news outlets and weblogs to see what spins were already underway.
The THE-QS ranking seems to have become the locomotive for the Times Higher Education, a higher education newsletter published weekly in the UK. In contrast to the daily Chronicle of Higher Education and the daily Inside Higher Ed (both based in the US), the Times Higher Education seems challenged to provide quality content of some depth even on its relatively lax weekly schedule. I spent four years in the UK in the mid-1990s, and can’t help but note how the quality of coverage of UK higher education news has declined over the last decade and more.
It seems as if the Times Higher has decided to allocate most of its efforts to promoting the creation and propagation of this global ranking scheme, rather than providing detailed, analytical, and critical coverage of issues in the UK, let alone in the European Higher Education Area. Six steady years of rankings generate attention and advertising revenue, and enhance some aspects of power and perceived esteem. But, in the end, where is the Times Higher in analyzing the forces shaping the systems in which all of these universities are embedded, or the complex forces shaping university development strategies? Instead, we primarily get increasingly thin articles based on relatively limited original research, heaps of advertising (especially jobs), and now regular build-ups to the annual rankings frenzy. In addition, their partnership with QS Quacquarelli Symonds is leading to new regional rankings; a clear form of market-making at a new, unexploited geographic scale. Of course rankings generate some useful insights, but the attention they attract is arguably making the Times Higher lazier and, dare I say, irresponsible, given the increasing significance of higher education to modern societies and economies.
In addition, I continue to be intrigued by how UK-based analysts and institutions seem infatuated with the term “international”, as if it necessarily means better quality than “national”. See, for example, the “international” elements of the current ranking in the figure below:
Leaving aside my problems with the limited scale of the survey numbers (can 9,386 academics represent the “world’s” academics? can 3,281 firm representatives represent the “world’s” employers?), and with the approach to weighting, why does the proportion of “international” faculty and students necessarily enhance the quality of university life?
Some universities, especially in Australasia and the UK, seek high proportions of international students to compensate for declining levels of government support and for weak extramural research funding (which provides income streams via overhead charges). Thus a higher number of international students may, in some cases, be inversely related to the quality of the university, or to the health of the public higher education system in which the university is embedded.
In addition, in some contexts, universities are legally required to limit “non-resident” student intake given the nature of the higher education system in place. But in the metrics used here, universities with the incentives and the freedom to admit large numbers of foreign students, for reasons other than the quality of said students, are rewarded with a higher rank.
The discourse of “international” is elevated here, much as it was in the last Research Assessment Exercise (RAE) in the UK, with “international” serving as a codeword for higher quality. But international is just that – international – and it means nothing more unless we assess how good international students and faculty actually are, what they contribute to the educational experience, and what lasting impacts they generate.
In any case, the THE-QS rankings are out. The relative position of universities in the rankings will be debated, and used to provide legitimacy for new or previously unrecognized claims. But it is really the methodology that needs to be unpacked, along with the nature and logics of the rankers themselves, not just the institutions being ranked.