University rankings: deliberations and future directions

I attended a conference (the Worldwide Universities Network-organised Realizing the Global University, with a small pre-event workshop) and an Academic Cooperation Association-organised workshop (Partners and Competitors: Analysing the EU-US Higher Education Relationship) last week. Both events were well run and fascinating. I’ll be highlighting some of the key themes and debates that emerged at them in several entries in GlobalHigherEd over the next two weeks.


One theme that garnered a significant amount of attention in both places was the ranking of universities (e.g., see one table here from the recent Times Higher Education Supplement-QS ranking that was published a few weeks ago). In both London and Brussels, stakeholders of all sorts spoke out, in mainly negative tones, about the impacts of ranking schemes, highlighting all of the usual critiques that have emerged over the last several years.

Suffice it to say that everyone is “troubled by” (“detests”, “rejects”, “can’t stand”, “loathes”, “abhors”, etc.) ranking schemes, but at the same time the schemes are used when seen fit: usually by relatively highly ranked institutions and systems to legitimize their standing in the global higher ed world, or (e.g., see the case of Malaysia) to flog the politicians and senior officials governing higher education systems and/or universities.

If ranking schemes are here to stay, as they seem to be (despite the Vice-Chancellor of the University of Bristol emphasizing in London that “we only have ourselves to blame”), four themes emerged as to where the global higher ed world might be heading with respect to rankings:

(1) Critique and reformulation. If ranking schemes are here to stay, as the products of credit rating agencies (e.g., Standard & Poor’s) also are, then the schemes need to be more effectively and forcefully critiqued, with an eye to the reformulation of existing methodologies. The Higher Education Funding Council for England (HEFCE), for example, is conducting research on ranking schemes, with a large report due to be released in February 2008. This comes on the back of the Institute for Higher Education Policy’s large “multi-year project to examine the ways in which college and university ranking systems influence decision making at the institutional and government policy levels”, and a multi-phase study by the OECD on the impact of rankings in select countries (see Phase I results here). On a related note, the three-year-old Faculty Scholarly Productivity Index is continually being developed in response to critiques, though I also know of many faculty and administrators who think it is beyond repair.

(2) Extending the power and focus of rankings. This week’s Economist notes that the OECD is developing a plan for a January 2008 meeting of member education ministers, at which it will seek approval to “[L]ook at the end result—how much knowledge is really being imparted”. What this means, in the words of Andreas Schleicher, the OECD’s head of education research, is that rather “than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”. The article notes that the first rankings should be out in 2010, and that:

[t]he task the OECD has set itself is formidable. In many subjects, such as literature and history, the syllabus varies hugely from one country, and even one campus, to another. But OECD researchers think that problem can be overcome by concentrating on the transferable skills that employers value, such as critical thinking and analysis, and testing subject knowledge only in fields like economics and engineering, with a big common core.

Moreover, says Mr Schleicher, it is a job worth doing. Today’s rankings, he believes, do not help governments assess whether they get a return on the money they give universities to teach their undergraduates. Students overlook second-rank institutions in favour of big names, even though the less grand may be better at teaching. Worst of all, ranking by reputation allows famous places to coast along, while making life hard for feisty upstarts. “We will not be reflecting a university’s history,” says Mr Schleicher, “but asking: what is a global employer looking for?” A fair question, even if not every single student’s destiny is to work for a multinational firm.

Leaving aside the complexities and politics of this initiative, the OECD is, yet again, setting the agenda for the global higher ed world.

(3) Blissful ignorance. The WUN event had a variety of speakers from the private for-profit world, including Jorge Klor de Alva, Senior Vice-President, Academic Excellence and Director of the University of Phoenix National Resource Center. The University of Phoenix, for those of you who don’t know, is part of the Apollo Group, has over 250,000 students, and is highly profitable with global ambitions. I attended a variety of sessions where people like Klor de Alva spoke, and they could not care less about ranking schemes, for their target “market” is a “non-traditional” one that (to date) tends not to matter to the rankers. Revenue, operating margin and income, and net income (e.g., see Apollo’s financials in its 2006 Annual Report), and the views of Wall Street analysts (but not the esteem of the intelligentsia), are what matter instead for these types of players.

(4) Performance indicators for “ordinary” universities. Several speakers and commentators suggested that the existing ranking schemes were frustrating to observe from the perspective of universities not ‘on the map’, especially if they would realistically never get on the map. Alternative schemes were discussed, including performance indicators that reflect the capacity of universities to meet local and regionally specific needs; needs that are often ignored by highly ranked universities or the institutions developing the ranking methodologies. A university could thus feel better or worse depending on how it fares over time against such indicators. This perspective is akin to that put forward by Jennifer Robinson in her brilliant critique of global cities ranking schemes. Robinson’s book, Ordinary Cities: Between Modernity and Development (Routledge, 2006), is well worth reading if this is your take on ranking schemes.

The future directions that ranking schemes will take are uncertain, though what is certain is that when the OECD and major funding councils start to get involved, the politics of university rankings cannot help but heat up even more. This said, presence and voice in the (re)shaping of schemes with distributional impacts always need to be viewed in conjunction with attention to absence and silence.

Kris Olds

