On the illogics of the Times Higher Education World Reputation Rankings (2013)

Note: you can link here for the Inside Higher Ed version of the same entry.

~~~~~~~~

Amidst all the hype and media coverage related to the just-released Times Higher Education World Reputation Rankings (2013), it’s worth reflecting on just how small a proportion of the world’s universities is captured in this exercise (see below). As I noted last November, the term ‘world university rankings’ does not reflect the reality of the exercise the rankers are engaged in; they focus on only a minuscule corner of the institutional ecosystem of the world’s universities.

The firms associated with rankings have normalized the annual rankings cycle despite this being an illogical exercise (unless you are interested in selling advertising space in a magazine and on a website). As Alex Usher pointed out earlier today in ‘The Paradox of University Rankings’ (and I quote in full):

By the time you read this, the Times Higher Education’s annual Reputation Rankings will be out, and will be the subject of much discussion on Twitter and the Interwebs and such.  Much as I enjoy most of what Phil Baty and the THE do, I find the hype around these rankings pretty tedious.

Though they are not an unalloyed good, rankings have their benefits.  They allow people to compare the inputs, outputs, and (if you’re lucky) processes and outcomes at various institutions.  Really good rankings – such as, for instance, the ones put out by CHE in Germany – even disaggregate data down to the departmental level so you can make actual apples-to-apples  comparisons by institution.

But to the extent that rankings are capturing “real” phenomena, is it realistic to think that they change every year?  Take the Academic Ranking of World Universities (ARWU), produced annually by Shanghai Jiao Tong University (full disclosure: I sit on the ARWU’s advisory board).   Those rankings, which eschew any kind of reputational surveys, and look purely at various scholarly outputs and prizes, barely move at all.  If memory serves, in the ten years since it launched, the top 50 has only had 52 institutions, and movement within the 50 has been minimal.  This is about right: changes in relative position among truly elite universities can take decades, if not centuries.

On the other hand, if you look at the Times World Reputation Rankings (found here), you’ll see that, in fact, only the position of the top 6 or so is genuinely secure.  Below about tenth position, everyone else is packed so closely together that changes in rank order are basically guaranteed, especially if the geographic origin of the survey sample were to change somewhat.  How, for instance, did UCLA move from 12th in the world to 9th overall in the THE rankings between 2011 and 2012 at the exact moment the California legislature was slashing its budget to ribbons?  Was it because of extraordinary new efforts by its faculty, or was it just a quirk of the survey sample?  And if it’s the latter, why should anyone pay attention to this ranking?

This is the paradox of rankings: the more important the thing you’re measuring, the less useful it is to measure it on an annual basis.  A reputation ranking done every five years might, over time, track some significant and meaningful changes in the global academic pecking order.  In an annual ranking, however, most changes are going to be the result of very small fluctuations or methodological quirks.  News coverage driven by those kinds of things is going to be inherently trivial.

[Screen grab: the Top 100 table from the 2013 rankings]

The real issues to ponder are not relative placement in the ranking and how the position of universities has changed, but instead why this ranking was created in the first place and whose interests it serves.

Kris Olds

World University Rankings — Time for a Name Change?

I’ve often wondered whether the term ‘World University Rankings’ — the one deployed by the firm QS in its QS World University Rankings®, and by TSL Education Ltd (with Thomson Reuters) in the Times Higher Education World University Rankings — is an accurate, and indeed ethical, one to use.

My concern over the term was heightened during a visit to Jamaica last week, where I attended the Association of Commonwealth Universities (ACU) Conference of Executive Heads. I was invited by the ACU, the world’s oldest international consortium with 500+ member institutions in 37 countries, to engage in a debate about rankings with Ms. Zia Batool (Director General, Quality Assurance and Statistics, Higher Education Commission, Pakistan) and Mr. Phil Baty (Editor, Times Higher Education). Link here for a copy of the conference agenda. The event was very well organized, and Professor Nigel Harris, Chair of the ACU Council and Vice Chancellor of the University of the West Indies, was a wonderful host.

My concern about the term ‘World University Rankings’ relates to the very small number of universities that are ranked relative to the total number of universities around the world that have combined research and teaching mandates. World University Rankings is a term that implies there is a unified field of universities that can be legitimately compared and ranked in an ordinal hierarchical fashion on the basis of some common metrics.

The words ‘World’ + ‘University’ imply that all of the universities scattered across the world are up for consideration, and that they can and will be ranked. The online Merriam-Webster Dictionary defines ‘rank’ as:

2 a : relative standing or position
2 b : a degree or position of dignity, eminence, or excellence : distinction <soon took rank as a leading attorney — J. D. Hicks>
2 c : high social position <the privileges of rank>
2 d : a grade of official standing in a hierarchy
3 : an orderly arrangement : formation
4 : an aggregate of individuals classed together — usually used in plural
5 : the order according to some statistical characteristic (as the score on a test)

Even more than the term ‘World Class Cities,’ the term World University Rankings is inclusive in symbolism, implying that any student, staff or faculty member from any university on any continent could examine these rankings online, or in the glossy magazine we received via Times Higher Education, and cross their fingers that ‘my’ or ‘our’ university might be in the Top 200 or Top 400. But look at the chances.

Alas, the vast majority of the world’s faculty, students and staff quickly feel depressed, dejected, unhappy, and sometimes concerned, when World University Rankings outcomes are examined. Students ask university leaders: “What’s wrong with our university? Why are we not in the world university rankings?” Expectations spurred on by the term are dashed year after year. This might not be such a problem were it not for the fact that politicians and government officials in ministries of higher education, or indeed in prime ministerial offices, frequently react the same way.

But should they be feeling like they were considered and then rejected? No.

First, there are vast structural differences in the world of higher education related to scales of material resources, human resources (e.g., No. 1 Caltech’s student-faculty ratio is 3-1!), access to the world’s information and knowledge banks (e.g., via library databases), missions (including mandates to serve the local region, build nations, serve the poor, and present minimal access hurdles), etc. Language matters too, for there is an undeniable global political and cultural economy to the world’s publication outlets (see ‘Visualizing the uneven geographies of knowledge production and circulation‘). These structural differences exist and cannot be wished away or ignored.

Second, it is worth reminding consumers of World University Rankings that these analytical devices are produced by private sector firms based in cities like London, whose core mission is to monetize the data they acquire (from universities themselves, for free, as well as from other sources) so as to generate a profit. Is profit trumping ethics? Do they really believe it is appropriate to use a term that implies a unified field of universities can be legitimately compared and ranked in an ordinal hierarchical fashion?

Is there an alternative term to World University Rankings that would better reflect the realities of the very uneven global landscape of higher education and research? Rankings and benchmarking are here to stay, but surely there must be a better way of representing what is really going on than implying everyone was considered, and 96-98% rejected. And let’s not pretend a discussion of methodology via footnotes, or a few methods-oriented articles in the rankings special issue, gets the point across.

The rankers out there owe it to the world’s universities (made up of millions of committed and sincere students, faculty, and staff) to convey who is really in the field of comparison. The term World University Rankings should be reconsidered, and a more accurate alternative should be utilized: this is one way corporate social responsibility is practiced in the digital age.

Kris Olds

Towards a Global Common Data Set for World University Rankers

Last week marked another burst of developments in the world university rankings sector, including two ‘under 50’ rankings — one from QS and one from Times Higher Education (with Thomson Reuters).

A coincidence? Very unlikely. But who was first with the idea, and why would the other ranker time their release so closely? We don’t know for sure, but we suspect the originator of the idea was Times Higher Education (with Thomson Reuters), as their outcome was formally released second. Moreover, the data analysis phase for the production of the THE 100 Under 50 was apparently “recalibrated,” whereas the QS data and methodology were the same as in their regular rankings – they just sliced the data a different way. But you never know for sure, especially given Times Higher Education‘s unceremonious dumping of QS for Thomson Reuters back in 2009.

Speaking of competition and cleavages in the world university rankings world, it is noteworthy that India’s University Grants Commission announced, on the weekend, that:

Foreign universities entering into agreement with their Indian counterparts for offering twinning programmes will have to be among the global top 500.

The Indian varsities on the other hand, should have received the highest accreditation grade, according to the new set of guidelines approved by University Grants Commission today.

“The underlining objective is to ensure that only quality institutes are permitted for offering the twinning programmes to protect the interest of the students,” a source said after a meeting which cleared the regulations on twinning programmes.

They said foreign varsities entering into tie-ups with Indian partners should be ranked among the top 500 by the Times Higher Education World University Ranking or by Shanghai Jiaotong University of the top 500 universities [now deemed the Academic Ranking of World Universities].

Why does this matter? We’d argue that it is another sign of the multi-sited institutionalization of world university rankings. And institutionalization generates path dependency and normalization. When more closely tied to the logic of capital, it also generates uneven development, meaning that there are always winners and losers in the process of institutionalizing a sector. In this case the world’s second most populous country, with a fast-growing higher education system, will be using these rankings to mediate which universities (and countries) its institutions can form linkages with.

Now, there are obvious pros and cons to the decision made by India’s University Grants Commission, including reducing the likelihood that ‘fly-by-night’ operations and foreign for-profits will be able to link up with Indian higher education institutions when offering international collaborative degrees. This said, the establishment of such guidelines does not necessarily mean they will be implemented. But this news item from India, related news from Denmark and the Netherlands regarding the uses of rankings to guide elements of immigration policy (see ‘What if I graduated from Amherst or ENS de Lyon…’; ‘DENMARK: Linking immigration to university rankings‘), as well as the emergence of the ‘under 50’ rankings, are worth reflecting on a little more. Here are two questions we’d like to leave you with.

First, does the institutionalization of world university rankings increase the obligations of governments to analyze the nature of the rankers? As in the case of ratings agencies, we would argue more needs to be known about the rankers, including their staffing, their detailed methodologies, their strategies (including with respect to monetization), their relations with universities and government agencies, potential conflicts of interest, and so on. To be sure, there are some very conscientious people working on the production and marketing of world university rankings, but these are individuals, and it is important to set up the rules of the game so that a fair and transparent system exists. After all, world university rankers contribute to the generation of outcomes yet do not have to experience the consequences of those outcomes.

Second, if government agencies are going to use such rankings to enable or inhibit international linkage formation processes, not to mention direct funding, or encourage mergers, or redefine strategy, then who should be the manager of the data that is collected? Should it solely be the rankers? We would argue that the stakes are now too high to leave the control of the data solely in the hands of the rankers, especially given that much of it is provided for free by higher education institutions in the first place. But if not these private authorities, then who else? Or, if not who else, then what else?

While we were drafting this entry on Monday morning, a weblog entry by Alex Usher (of Canada’s Higher Education Strategy Associates) coincidentally generated a ‘pingback’ to an earlier entry titled ‘The Business Side of World University Rankings.’ Alex Usher’s entry (pasted in below, in full) raises an interesting question that is worthy of careful consideration, not just because of the idea of how the data could be more fairly stored and managed, but also because of his suggestions regarding the process to push this idea forward:

My colleague Kris Olds recently had an interesting point about the business model behind the Times Higher Education’s (THE) world university rankings. Since 2009 data collection for the rankings has been done by Thomson Reuters. This data comes from three sources. One is bibliometric analysis, which Thomson can do on the cheap because it owns the Web of Science database. The second is a reputational survey of academics. And the third is a survey of institutions, in which schools themselves provide data about a range of things, such as school size, faculty numbers, funding, etc.

Thomson gets paid for its survey work, of course. But it also gets the ability to resell this data through its consulting business. And while there’s little clamour for their reputational survey data (its usefulness is more than slightly marred by the fact that Thomson’s disclosure about the geographical distribution of its survey responses is somewhat opaque) – there is demand for access for all that data that institutional research offices are providing them.

As Kris notes, this is a great business model for Thomson. THE is just prestigious enough that institutions feel they cannot say no to requests for data, thus ensuring a steady stream of data which is both unique and – perhaps more importantly – free. But if institutions which provide data to the system want any data out of it again, they have to pay.

(Before any of you can say it: HESA’s arrangement with the Globe and Mail is different in that nobody is providing us with any data. Institutions help us survey students and in return we provide each institution with its own results. The Thomson-THE data is more like the old Maclean’s arrangement with money-making sidebars).

There is a way to change this. In the United States, continued requests for data from institutions resulted in the creation of a Common Data Set (CDS); progress on something similar has been more halting in Canada (some provincial and regional ones exist but we aren’t yet quite there nationally). It’s probably about time that some discussions began on an international CDS. Such a data set would both encourage more transparency and accuracy in the data, and it would give institutions themselves more control over how the data was used.

The problem, though, is one of co-ordination: the difficulties of getting hundreds of institutions around the world to co-operate should not be underestimated. If a number of institutional alliances such as Universitas 21 and the Worldwide Universities Network, as well as the International Association of Universities and some key university associations were to come together, it could happen. Until then, though, Thomson is sitting on a tidy money-earner.

While you could argue about the pros and cons of the idea of creating a ‘global common data set,’ including the likelihood of one coming into place, what Alex Usher is also implying is that there is a distinct lack of governance regarding world university rankers. Why are universities so anemic when it comes to this issue, and why are higher education associations not filling the governance space neglected by key national governments and international organizations? One answer is that their own individual self-interest has them playing the game as long as they are winning. Another possible answer is that they have not thought through the consequences, or really challenged themselves to generate an alternative. Another is that the ‘institutional research’ experts (e.g., those represented by the Association for Institutional Research in the case of the US) have not focused their attention on the matter. But whatever the answer, at the very least they need to be posing themselves a set of questions. And if it’s not going to happen now, when will it? Only after MIT demonstrates some high-profile global leadership on this issue, perhaps with Harvard, like it did with MITx and edX?

Kris Olds & Susan L. Robertson

Why now? Making markets via the THE World Reputation Rankings

The 2012 Times Higher Education (THE) World Reputation Rankings were released at 00.01 on 15 March by Times Higher Education via its website. It was intensely promoted via Twitter by the ‘Energizer Bunny’ of rankings, Phil Baty, and will be circulated in hard copy format to the magazine’s subscribers.

As someone who thinks there are more cons than pros related to the rankings phenomenon, I could not resist examining the outcome, of course! See below and to the right for a screen grab of the Top 20, with Harvard demolishing the others in the reputation standings.

I do have to give Phil Baty and his colleagues at Times Higher Education and Thomson Reuters credit for enhancing the reputation rankings methodology. Each year their methodology gets better and better.

But, and this is a big but, I have to ask myself why the reputation ranking is coming out on 15 March 2012 when the survey was distributed in April/May 2011, and when the data was used in the 2011 World University Rankings, which were released in October 2011. It is not as if the reputation outcome presented here is complex. The timing makes no sense whatsoever from an analytical angle.

However, if we think about the business of rankings, versus analytical cum methodological questions, the release of the ‘Reputation Rankings’ makes absolute sense.

First, the release of the reputation rankings now keeps the rankings agenda, and Times Higher Education/Thomson Reuters, elevated in both higher education and mass media outlets. The media coverage unfolding as you read this particular entry would not be emerging if the reputation rankings were bundled into the general World University Rankings that were released back in October. It is important to note that QS has adopted the same ‘drip drip’ approach with the release of field-specific ranking outcomes, regional ranking outcomes, etc. A single annual blast in today’s ‘attention economy’ is never enough for world university rankers.

Second, and on a related note, the British Council’s Going Global 2012 conference is being held in London from 13-15 March. As the British Council put it:

More than five hundred university Presidents, Vice-Chancellors and sector leaders will be among the 1300 delegates to the British Council’s ‘Going Global 2012’ conference in March.

The conference will be the biggest ever gathering of higher education leaders. More than 80 countries will be represented, as leaders from government, academia and industry debate a new vision of international education for the 21st century.

The Times Higher Education magazine is released every Thursday (so 15 March this week), and so this event provides the firms TSL Education Ltd. and Thomson Reuters with a captive audience of ‘movers and shakers’ for their products and associated advertising. Times Higher Education is also an official media partner for Going Global 2012.

Make no mistake about it – there is an economic logic to releasing the reputation rankings today, and this trumps an analytical logic that should have led Times Higher Education to release the reputation outcome back in October so we could all better understand the world university ranking outcome and methodology.

More broadly, there really is no logic to the annual cycle of world rankings; if there were, funding councils worldwide would benchmark annually. But there is a clear business logic to normalizing the annual cycle of world university rankings, and this has indeed become the ‘new normal.’ But even this is not enough. Much like the development and marketing of running shoes, iPods, and fashion accessories, the informal benchmarking that has always gone on in academia has become formalized, commercialized, and splintered into distinct and constantly emerging products.

In the end, it is worth reflecting on whether such rankings are improving learning and research outcomes, as well as institutional innovation. And it is worth asking if the firms behind such rankings are themselves as open and transparent about their practices and agendas as they expect their research subjects (i.e. universities) to be.

Now back to those rankings. Congrats, Harvard! But more importantly, I wonder if UW-Madison managed to beat Michigan… oh oh.

Kris Olds

The 2010 THE World University Rankings, powered by Thomson Reuters

The new 2010 Times Higher Education (THE) World University Rankings issue has just been released and we will see, no doubt, plenty of discussion and debate about the outcome. Like them or not, rankings are here to stay and the battle is now on to shape their methodologies, their frequency, the level of detail they freely provide to ranked universities and the public, their oversight (and perhaps governance?), their conceptualization, and so on.

Leaving aside the ranking outcome (the top 30, from a screen grab of the top 200, is pasted in below), it is worth noting that this new rankings scheme has been produced with the analytic insights, power, and savvy of Thomson Reuters, a company with 2009 revenue of US $12.9 billion and “over 55,000 employees in more than 100 countries”.

As discussed on GlobalHigherEd before:

Thomson Reuters is a private global information services firm, and a highly respected one at that. Apart from ‘deep pockets’, they have knowledgeable staff, and a not insignificant number of them. For example, on 14 September Phil Baty, of Times Higher Education, sent out this fact via their Twitter feed:

2 days to #THEWUR. Fact: Thomson Reuters involved more than 100 staff members in its global profiles project, which fuels the rankings

The incorporation of Thomson Reuters into the rankings game by Times Higher Education was a strategically smart move for this media company, for it arguably (a) enhances their capacity (in principle) to improve ranking methodology and implementation, and (b) improves the respect the ranking exercise is likely to get in many quarters. Thomson Reuters is, thus, an analytical-cum-legitimacy vehicle of sorts.

What does this mean regarding the 2010 THE World University Rankings outcome? Well, regardless of your views on the uses and abuses of rankings, this Thomson Reuters-backed outcome will generate more, not less, attention from the media, ministries of education, and universities themselves. And if the outcome generates any surprises, it will be harder for some university leaders to explain why their universities have fallen down the rankings ladder. In other words, the data will be perceived to be more reliable, and the methodology more rigorously framed and implemented, even if methodological problems continue to exist.

Yet, this is a new partnership, and a new methodology, and it should therefore be counted as YEAR 1 of the THE World University Rankings.

As the logo above makes very clear, this is a powered (up) outcome, with power at play on more levels than one: welcome to a new ‘roll-out’ phase in the construction of what could be deemed a global ‘audit culture’.

Kris Olds

A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge-over-Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the No. 1 / No. 2 placing). For non-rankings watchers, Harvard had been pegged as No. 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, and in particular the decline in many US universities’ shares of “international faculty,” was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front end (post-ranking) acquisition of the data — focuses the majority of costs on the universities and the majority of benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end it is a cost primarily borne by the taxpayer (for most universities are public). It is the taxpayer who pays the majority of the administrators’ salaries that enable them to compile the data and submit it to the rankers.

A second, though indirect and obscured, cost relates to the use of rankings data by credit rating agencies like Moody’s or Standard & Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘). Given that the cost of borrowing for universities is determined by their credit-worthiness, and rankings are used in this process, we can conclude that any increase in the cost of borrowing is actually also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to freely use the base rankings data for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled.  Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, the base data — beyond the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September) or the QS World University Rankings Results 2010 — should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, as in exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that taxpayer-funded research generates research outcomes that society should have free, open and timely access to. Surely this open access principle applies just as well to university rankings data!

Another reason society deserves to have free, open and timely access to the data is that a change in practices will shed light on how the organizations ranking universities implement their methodologies; methodologies that are ever changing (and hence more open to error).

Finer-grained access to the data would enable us to check out exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, we cross-check the data when outcomes are this close, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between releases of the hyper-competitive QS vs THE world university rankings.
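To illustrate why outcomes this close invite scrutiny, here is a deliberately hypothetical sketch (the indicator scores, weights, and institution labels are invented, and the rescaling is a generic ‘leader = 100’ normalization, not QS’s disclosed methodology): two near-identical institutions end up a point or so apart once composite scores are rescaled so the leader sits at exactly 100, and a modest change in the weighting scheme is enough to swap their order.

```python
# Hypothetical illustration only: indicator scores and weights are invented,
# and the rescaling is a generic "leader = 100" normalization, not QS's
# disclosed methodology.

def composite(scores, weights):
    """Simple weighted summation of indicator scores."""
    return sum(weights[k] * scores[k] for k in scores)

uni_x = {"academic_rep": 100.0, "employer_rep": 100.0, "citations": 89.0,
         "faculty_student": 97.0, "international": 96.0}
uni_y = {"academic_rep": 100.0, "employer_rep": 100.0, "citations": 96.0,
         "faculty_student": 95.0, "international": 87.0}

weights_a = {"academic_rep": 0.40, "employer_rep": 0.10, "citations": 0.10,
             "faculty_student": 0.25, "international": 0.15}
weights_b = {"academic_rep": 0.30, "employer_rep": 0.10, "citations": 0.35,
             "faculty_student": 0.15, "international": 0.10}

for label, w in [("weights A", weights_a), ("weights B", weights_b)]:
    raw = {"Univ X": composite(uni_x, w), "Univ Y": composite(uni_y, w)}
    top = max(raw.values())
    rescaled = {name: round(100 * value / top, 2) for name, value in raw.items()}
    # The leader is pinned at 100 and the runner-up trails by roughly a point;
    # which institution leads flips between the two weighting schemes.
    print(label, rescaled)
```

That, of course, is exactly the kind of check one cannot run without access to the underlying indicator values.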

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it will admittedly rebalance the current focus of the majority of the costs on the universities, and the majority of the benefits on the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

Bibliometrics, global rankings, and transparency

Why do we care so much about the actual and potential uses of bibliometrics (“the generic term for data about publications,” according to the OECD), and world university ranking methodologies, but care so little about the private sector firms, and their inter-firm relations, that drive the bibliometrics/global rankings agenda forward?

This question came to mind when I was reading the 17 June 2010 issue of Nature magazine, which includes a detailed assessment of various aspects of bibliometrics, including the value of “science metrics” to assess aspects of the impact of research output (e.g., publications) as well as “individual scientific achievement”.

The Nature special issue, especially Richard Van Noorden’s survey of the “rapidly evolving ecosystem” of [biblio]metrics, is well worth a read. Even though bibliometrics can be a problematic and fraught dimension of academic life, they are rapidly becoming an accepted dimension of the governance (broadly defined) of higher education and research. Bibliometrics are having a diverse and increasingly deep impact on governance processes at a range of scales, from the individual (a key focus of the Nature special issue) through to the unit/department, the university, the discipline/field, the national, the regional, and the global.

Now while the development process of this “eco-system” is rapidly changing, and a plethora of innovations are occurring regarding how different disciplines/fields should or should not utilize bibliometrics to better understand the nature and impact of knowledge production and dissemination, it is interesting to stand back and think about the non-state actors producing, for profit, this form of technology that meshes remarkably well with our contemporary audit culture.

In today’s entry, I’ve got two main points to make, before concluding with some questions to consider.

First, it seems to me that there is a disproportionate amount of research being conducted on the uses and abuses of metrics in contrast to research on who the producers of these metrics are, how these firms and their inter-firm relations operate, and how they attempt to influence the nature of academic practice around the world.

Now, I am not seeking to imply that firms such as Elsevier (producer of Scopus), Thomson Reuters (producer of the ISI Web of Knowledge), and Google (producer of Google Scholar) are necessarily generating negative impacts (see, for example, ‘Regional content expansion in Web of Science®: opening borders to exploration’, a good news story from Thomson Reuters that we happily sought out), but I want to make the point that there is a glaring disjuncture between the volume of research conducted on bibliometrics and research on these firms (the bibliometricians), and on how these technologies are brought to life and to market. For example, a search of Thomson Reuters’ ISI Web of Knowledge for terms like Scopus, Thomson Reuters, Web of Science and bibliometrics generates a nearly endless list of articles comparing the main databases, the innovations associated with them, and so on, but amazingly little research on Elsevier or Thomson Reuters (i.e. the firms). From thick to thin, indeed, and somewhat analogous to the lack of substantial research available on ratings agencies such as Moody’s or Standard & Poor’s.

Second, and on a related note, the role of firms such as Elsevier and Thomson Reuters, not to mention QS Quacquarelli Symonds Ltd and TSL Education Ltd, in fueling the global rankings phenomenon has received remarkably little attention in contrast to the vigorous debates about methodologies. The four main global ranking schemes, past and present, all draw from the databases provided by Thomson Reuters and Elsevier.

One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly not stated) logic for such a frequency. Why not every 3-4 years, for example, perhaps in alignment with the World Cup or the Olympics? I can understand why rankings have to happen more frequently than the US’ long-delayed National Research Council (NRC) scheme, and they certainly need to happen more frequently than the years France wins the World Cup championship title (sorry…) but why rank every single year?

But, let’s think about this issue with the firms in mind versus the pros and cons of the methodologies in mind.

From a firm perspective, the annual cycle arguably needs to become normalized for it is a mechanism to extract freely provided data out of universities. This data is clearly used to rank but is also used to feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

QS Quacquarelli Symonds Ltd, for example, was marketing such services (see an extract, above, from a brochure) at their stand at the recent NAFSA conference in Kansas City, while Thomson Reuters has been busy developing what they deem the Global Institutional Profiles Project. This latter project is being spearheaded by Jonathon Adams, a former Leeds University staff member who established a private firm (Evidence Ltd) in the early 1990s that rode the UK’s Research Assessment Exercise (RAE) and European ERA waves before being acquired by Thomson Reuters in January 2009.

Sophisticated on-line data entry portals (see a screen grab of one above) are also being created. These portals build a free-flowing (if largely one-way) pipeline between the administrative offices of hundreds of universities around the world and the firms doing the ranking.

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

A key objective, then, seems to involve using annual global rankings to update fee-generating databases, not to mention boost intra-firm knowledge bases and capabilities (for consultancies), all operational at the global scale.

In closing, a few questions. First, is the posited disjuncture between research on bibliometrics and research on the bibliometricians, and the information service firms these units are embedded within, worth noting and doing something about?

Second, what is the rationale for annual rankings versus a more measured rankings window, in a temporal sense? Indeed, why not synchronize all global rankings to specific years (e.g., 2010, 2014, 2018) so as to reduce the strain on universities vis-à-vis the provision of data, and enable timely comparisons between competing schemes? A more measured pace would arguably reflect the actual pace of change within our higher education institutions rather than the needs of these private firms.

And third, are firms like Thomson Reuters and Elsevier, as well as their partners (esp., QS Quacquarelli Symonds Ltd and TSL Education Ltd), being as transparent as they should be about the nature of their operations? Perhaps it would be useful to have accessible disclosures/discussions about:

  • What happens with all of the data that universities freely provide?
  • What is stipulated in the contracts between teams of rankers (e.g., Times Higher Education and Thomson Reuters)?
  • What rights do universities have regarding the open examination and use of all of the data and associated analyses created on the basis of the data universities originally provided?
  • Who should be governing, or at least observing, the relationship between these firms and the world’s universities? Is this relationship best continued on a bilateral firm to university basis? Or is the current approach inadequate? If it is perceived to be inadequate, should other types of actors be brought into the picture at the national scale (e.g., the US Department of Education or national associations of universities), the regional-scale (e.g., the European University Association), and/or the global scale (e.g., the International Association of Universities)?

In short, is it not time that the transparency agenda the world’s universities are being subjected to also be applied to the private sector firms that are driving the bibliometrics/global rankings agenda forward?

Kris Olds

CHERPA-network based in Europe wins tender to develop alternative global ranking of universities

Finally, the decision on who has won the European Commission’s million-euro tender – to develop and test a global ranking of universities – has been announced.

The successful bidder – the CHERPA network (the Consortium for Higher Education and Research Performance Assessment) – is charged with developing a ranking system to overcome what the European Commission regards as the limitations of the Shanghai Jiao Tong and QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises a consortium of leading European institutions in the field; all have been developing and offering rather different approaches to ranking over the past few years (see our earlier stories here, here and here for some of the potential contenders).

Will this new European Commission-driven initiative set the proverbial European cat amongst the Transatlantic alliance pigeons?

As we have noted in earlier commentary on university rankings, the different approaches tip the rankings playing field in the direction of different interests. Much to the chagrin of the continental Europeans, the high-status US universities do well on the Shanghai Jiao Tong University ranking, whilst Britain’s QS-Times Higher Education ranking tends to see UK universities feature more prominently.

CHERPA will develop a design that follows the so-called ‘Berlin Principles on the ranking of higher education institutions‘. These principles stress the need to take into account the linguistic, cultural and historical contexts of educational systems [this is something of an irony for those watching UK higher education developments last week following a Cabinet reshuffle, in which reference to ‘universities’ in the departmental name was dropped. The two-year-old Department for Innovation, Universities and Skills has now been abandoned in favor of a mega-Department for Business, Innovation and Skills! (read more here)].

According to the website of one of the Consortium members, CHE:

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

The different rankings will be targeted at different stakeholders: They will support decision-making in universities and especially better informed study decisions by students. Rankings that create transparency for prospective students should promote access to higher education.

The University World News, in their report out today on the announcement, notes:

Testing will take place next year and must include a representative sample of at least 150 institutions with different missions in and outside Europe. At least six institutions should be drawn from the six large EU member states, one to three from the other 21, plus 25 institutions in North America, 25 in Asia and three in Australia.

There are multiple logics and politics at play here. On the one hand, a European ranking system may well give the European Commission more higher education governance capacity across Europe, strengthening its steering over national systems in areas like ‘internationalization’ and ‘regional engagement’ – two key areas that have been identified for work to be undertaken by CHERPA.

On the other hand, this new European ranking system — when realized — might also appeal to countries in Latin America, Africa and Asia that currently do not feature in any significant way in the two dominant systems. Like the Bologna Process, the CHERPA ranking system might well find itself generating ‘echoes’ around the globe.

Or will regions around the world prefer to develop and promote their own niche ranking systems, elements of which were evident in the QS.com Asia ranking that was recently launched? Whatever the outcome, as we have observed before, there is a thickening industry, with profits to be had, in this aspect of the emerging global higher education landscape.

Susan Robertson

QS.com Asian University Rankings: niches within niches…within…

Today, for the first time, the QS Intelligence Unit published its list of the top 100 Asian universities in the QS.com Asian University Rankings.

There is little doubt that the top-performing universities have already added this latest branding to their websites, or that Hong Kong SAR will have proudly announced it has three universities in the top 5 while Japan has two.

QS.com Asian University Rankings is a spin-out from the QS World University Rankings, published since 2005. Last year, when the 2008 QS World University Rankings was launched, GlobalHigherEd posted an entry asking: “Was this a niche industry in formation?” This was in reference to the strict copyright rules invoked – that ‘the list’ of decreasing ‘worldclassness’ could not be displayed, retransmitted, published or broadcast – as well as an acknowledgment that rankings and associated activities can enable the building of firms such as QS Quacquarelli Symonds Ltd.

Seems like there are ‘niches within niches within… niches’ emerging in this game of deepening and extending the status economy in global higher education. According to the QS Intelligence website:

Interest in rankings amongst Asian institutions is amongst the strongest in the world – leading to Asia being the first of a number of regional exercises QS plans to initiate.

The narrower the geographic focus of a ranking, the richer the available data can potentially be – the US News & World Report draws on 18 indicators, the Joong Ang Ilbo ranking in Korea on over 30. It is both appropriate and crucial then that the range of indicators used at a regional level differs from that used globally.

The objectives of each exercise are slightly different – whilst a global ranking seeks to identify truly world class universities, contributing to the global progress of science, society and scholarship, a regional ranking should adapt to the realities of the region in question.

Sure, the ‘regional niche’ allows QS.com to package and sell new products to Asian and other universities, as well as information to prospective students about who is regarded as ‘the best’.

However, the QS.com Asian University Rankings does more work than just that. The ranking process and product places ‘Asian universities’ into direct competition with each other, reinforces a very particular definition of ‘Asia’ and therefore of Asian regionalism, and services an imagined, emerging Asian regional education space.

All this, whilst appearing to level the playing field by invoking regional sentiments.

Susan Robertson

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has particular expertise in education economics and applied econometrics.

Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, together with the demand for greater accountability and transparency, have all contributed to raising the need to compare university quality across countries.

Recognition of this fact has also been greatly stirred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in terms of cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU is always associated with a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of the fragmented French higher education landscape (see Les Echos, 7 August 2008).

Whether the rankers intend it or not, university rankings have taken on a life of their own: they are used by national policy makers to stimulate debates about national university systems, and they can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and that they are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES and SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the volatility of the university ranks to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we activate simultaneously different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (set of selected indicators, weighting scheme, and aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than single ranks, allows one to deal with the criticism, often made of league tables and ranking systems, that ranks are presented as if they were calculated under conditions of certainty while this is rarely the case.
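The multi-modeling logic is straightforward to sketch. The fragment below is a minimal illustration of the idea, not the CRELL/JRC code, and it uses randomly generated indicator scores for made-up institutions: a composite indicator is recomputed under thousands of simulated scenarios (random weighting schemes and two aggregation rules), and a frequency matrix records how often each institution obtains each rank.

```python
# Minimal sketch of a multi-modeling robustness analysis (illustrative only,
# not the CRELL/JRC implementation; all indicator scores are synthetic).
import numpy as np

rng = np.random.default_rng(0)

universities = ["Univ A", "Univ B", "Univ C", "Univ D", "Univ E"]
# Synthetic normalized indicator scores (rows: universities, columns: indicators).
scores = rng.uniform(40, 100, size=(len(universities), 6))

n_scenarios = 10_000
n_unis, n_ind = scores.shape
# rank_freq[u, r] counts the scenarios in which university u obtains rank r.
rank_freq = np.zeros((n_unis, n_unis), dtype=int)

for _ in range(n_scenarios):
    weights = rng.dirichlet(np.ones(n_ind))           # a random weighting scheme (sums to 1)
    if rng.random() < 0.5:
        composite = scores @ weights                  # weighted arithmetic summation
    else:
        composite = np.exp(np.log(scores) @ weights)  # alternative: weighted geometric aggregation
    order = np.argsort(-composite)                    # highest composite score ranked first
    for rank, uni in enumerate(order):
        rank_freq[uni, rank] += 1

# Frequency matrix: share of scenarios in which each university lands at each rank.
print((rank_freq / n_scenarios).round(2))
```

Presenting rows of such a matrix, rather than a single rank per institution, is what makes visible how wide the plausible rank interval is for institutions outside the very top.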

The main findings of the report are the following. Both rankings are only robust in the identification of the top 15 performers on either side of the Atlantic, but unreliable on the exact ordering of all other institutions. And, even when combining all twelve indicators in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for those universities. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation to be added to the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.

Beatrice d’Hombres and Michaela Saisana