The 2010 THE World University Rankings, powered by Thomson Reuters

The new 2010 Times Higher Education (THE) World University Rankings issue has just been released and we will see, no doubt, plenty of discussions and debate about the outcome. Like them or not, rankings are here to stay and the battle is now on to shape their methodologies, their frequency, the level of detail they freely provide to ranked universities and the public, their oversight (and perhaps governance?), their conceptualization, and so on.

Leaving aside the ranking outcome (the top 30, from a screen grab of the top 200, is pasted in below), it is worth noting that this new rankings scheme has been produced with the analytic insights, power, and savvy of Thomson Reuters, a company with 2009 revenue of US $12.9 billion and “over 55,000 employees in more than 100 countries”.

As discussed on GlobalHigherEd before:

Thomson Reuters is a private global information services firm, and a highly respected one at that. Apart from ‘deep pockets’, they have knowledgeable staff, and a not insignificant number of them. For example, on 14 September Phil Baty, of Times Higher Education, sent out this fact via their Twitter feed:

2 days to #THEWUR. Fact: Thomson Reuters involved more than 100 staff members in its global profiles project, which fuels the rankings

The incorporation of Thomson Reuters into the rankings game by Times Higher Education was a strategically smart move for this media company, for it arguably (a) enhances their capacity (in principle) to improve ranking methodology and implementation, and (b) improves the respect the ranking exercise is likely to get in many quarters. Thomson Reuters is, thus, an analytical-cum-legitimacy vehicle of sorts.

What does this mean regarding the 2010 THE World University Rankings outcome? Well, regardless of your views on the uses and abuses of rankings, this Thomson Reuters-backed outcome will generate more, not less, attention from the media, ministries of education, and universities themselves. And if the outcome generates any surprises, it will be harder for some university leaders to explain why their universities have fallen down the rankings ladder. In other words, the data will be perceived to be more reliable, and the methodology more rigorously framed and implemented, even if methodological problems continue to exist.

Yet, this is a new partnership, and a new methodology, and it should therefore be counted as YEAR 1 of the THE World University Rankings.

As the logo above makes very clear, this is a powered (up) outcome, with power at play on more levels than one: welcome to a new ‘roll-out’ phase in the construction of what could be deemed a global ‘audit culture’.

Kris Olds

A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge over Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the first and second placings). For non-rankings watchers, Harvard had been pegged as No. 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, and the associated decline in US universities’ shares of “international faculty,” was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front-end (post-ranking) acquisition of the data — places the majority of the costs on the universities and the majority of the benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end it is a cost primarily borne by the taxpayer (for most universities are public). It is the taxpayer who pays the majority of the administrators’ salaries that enable the data to be compiled and submitted to the rankers.

A second, though indirect and obscured, cost relates to the use of rankings data by credit rating agencies like Moody’s or Standard & Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘“Passing judgment”: the role of credit rating agencies in the global governance of UK universities’). Given that the cost of borrowing for universities is determined by their credit-worthiness, and rankings are used in this process, we can conclude that any increase in the cost of borrowing is also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to freely use the base rankings data for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled.  Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, all of the base data — beyond the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September), or QS World University Rankings Results 2010 — should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, as in exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that taxpayer-funded research generates outcomes to which society should have free, open and timely access. Surely this principle applies just as well to university rankings data!

Another reason society deserves to have free, open and timely access to the data is that a change in practices will shed light on how the organizations ranking universities implement their methodologies; methodologies that are ever changing (and hence more open to error).

Finer-grained access to the data would enable us to check out exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, outcomes this close lead us to cross-check the data, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between releases of the hyper-competitive QS vs THE world university rankings.

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it will admittedly rebalance the current focus of the majority of the costs on the universities, and the majority of the benefits on the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

Are we witnessing the denationalization of the higher education media?

The denationalization of higher education – the process whereby developmental logics, frames, and practices are increasingly associated with what is happening at a larger (beyond the nation) scale – continues apace. As alluded to in my last two substantive entries:

this process is being shaped by new actors, new networks, new rationalities, new technologies, and new temporal rhythms. Needless to say, this development process is also generating a myriad of impacts and outcomes, some welcome, and some not.

While the denationalization process is a phenomenon that is of much interest to policy-making institutions (e.g., the OECD), foundations and funding councils, scholarly research networks, financial analysts, universities, and the like, I would argue that it is only now, at a relatively late stage in the game, that the higher education media is starting to take more systematic note of the contours of denationalization.

How is this happening? I will address this question by focusing in on recent changes in the English language higher education media in two key countries – the UK and the USA (though I recognize that University World News, described below, is not so simply placed).

From a quantitative and qualitative perspective, we are seeing rapid growth in the ostensibly ‘global’ coverage of the English-language higher education media from the mid-2000s on. While some outlets (e.g., the Chronicle of Higher Education) have had correspondents abroad since the 1970s, there are some noteworthy developments:

2004/2005

2007

  • University World News (UWN) launched in October. This outlet is the product of a network of journalists, many formerly associated with THES, who were frustrated with the disconnect between the globalization of higher education and the narrow national focus of ‘niche’ higher education media outlets. As with IHE, UWN’s free digital-only mode enhances the ability of this outlet to reach a relatively wide range of people located throughout the world.

2009/2010

  • Chronicle of Higher Education launches a virtual Global edition (similar in style to the New York Times’ Global edition) in May. A new $2 million strategic plan leads to the ongoing hiring of more Washington DC-based editorial staff, more correspondents (to be based in Latin America, Asia, the Middle East and Europe), enhanced travel for US-based sectoral experts, and the establishment of a new weblog (WorldWise).
  • Inside Higher Ed announces it is hosting three new weblogs (GlobalHigherEd; University of Venus; The World View), all with substantial globally-themed coverage. Reporter staff time is retuned, to a degree, to prioritize key global issues/processes/patterns. IHE forms a collaborative relationship with Times Higher Education to cross-post selected articles on their respective web sites.
  • Times Higher Education (THE) teams up with Thomson Reuters to produce the Times Higher Education/Thomson Reuters World University Rankings (2010 on). THE continues to draw upon guest contributions from faculty about ‘global’ issues and developmental dynamics: this is partly an outcome of seeking to meet the needs and conceptual vocabulary of their faculty-dominated audience, while also controlling staff costs. The digital edition of THE International launched in July 2010.

From a temporal and technological perspective, it is clear that all of these outlets are ramping up their capacity to disseminate digital content, facilitate and/or shape debates, market themselves, and build relevant multi-scalar networks. For example, I can’t help but think about the differences between how I engaged with the THES (as it used to be called) as a Bristol-based reader in the first half of the 1990s and now. In the 1990s we would have friendly squabbles in the Geography tea room to get our hands on it so we could examine the jobs pages. Today, in 2010, THE staffers tweet (via @timeshighered and @THEworldunirank) dozens of times per day, and I can sit here in Madison WI and read the THE website, as well as THE International, the moment they are loaded up on the web.

While all of these higher education media outlets are seeking to enhance their global coverage, they are obviously approaching it in their own unique ways, reflective of their organizational structure and resources, the nature of their audiences, and the broader media and corporate contexts in which they are embedded.

In many ways, then, the higher education media are key players in the new global higher education landscape for they shape debates via what they cover and what they ignore. These media firms are also now able to position themselves on top of hundreds of non-traditional founts of information via Twitter sources, select weblogs (some of which they are adopting), state-supported news crawlers (e.g., Canada’s Manitoba International Education News; the Netherlands’ forthcoming NUFFICblog; the UK’s HE International Unit site and newsletter), cross-references to other media sources (e.g., they often profile relevant NY Times stories), and so on — a veritable BP oil well gusher of information about the changing higher education landscape. In doing so, the higher education media outlets are positioning themselves as funnels or channels of relevant (it is hoped) and timely information and knowledge.

What are we to make of the changes noted above?

In my biased view, these are positive changes on many levels for they are reflective of media outlets recognizing that the world is indeed changing, and that they have an obligation to profile and assist others in better understanding this emerging landscape. Of course these are private media firms that sell services and must make a profit in the end, but they are firms managed by people with a clear love for the complex worlds of higher education.

This said, there are some silences, occlusions, and possible conflicts of interest, though not necessarily by design.

First, English is clearly the lingua franca associated with this new media landscape. This is not surprising, perhaps, given my selective focus and the structural forces at work, but it is worth pausing and reflecting about the implications of this linguistic bias. Concerns aside, there are no easy solutions to the hegemony of English in the global higher education media world. For example, while there is no European higher education media ‘voice’ (see ‘Where is Europe’s higher education media?‘), if one were to emerge could it realistically function in any other language than English given the diversity of languages used in the 47 member country systems making up the European Higher Education Area?

Second, these outlets, as well as many others I have not mentioned, are all grappling with the description versus analysis tension, and the causal forces versus outcomes focus tension. Light and breezy stories may capture initial interest, but in the end the forces shaping the outcomes need to be unpacked and deliberated about.

Third, the diversification strategies that these media outlets have considered, and selectively adopted, can generate potential conflicts of interest. I have a difficult time, for example, reading Washington Post-based stories about the for-profit higher education sector knowing that this newspaper is literally kept afloat by Kaplan, a major for-profit higher education firm. And insights and effort aside, can THE journalists and editors write about their own rankings, or other competitive ranking initiatives (e.g., see ‘’Serious defects’ apparent in ‘crude’ European rankings project’), with the necessary distance needed to be analytical versus boosterish? I’ll leave the ‘necessary distance’ question for others to reflect about, and assume that this is a question that the skilled professionals representing the Washington Post and the THE must be grappling with.

Finally, is it possible to provide The World View, be WorldWise, or do justice to the ‘global’, in a weblog or any media outlet? I doubt it, for we are all situated observers of the unfolding of the global higher education landscape. There is no satellite platform that is possible to stand upon, and we are all (journalists, bloggers, pundits, academics, etc.) grappling with how to make sense of the denationalizing systems we know best, not to mention the emerging systems of regional and global governance that are being constructed.

All that can be done, perhaps, is to enhance analytical capabilities, encourage the emergence of new voices, and go for it while being open and transparent about biases and agendas, blind spots and limitations.

Kris Olds

Note: my sincere thanks to the editors of the Chronicle of Higher Education, Inside Higher Ed, Times Higher Education, and University World News, for passing on their many insights via telephone and email correspondence.  And thanks to my colleagues Yi-Fu Tuan and Mary Churchill for their indirectly inspirational comments about World views this past week. Needless to say, the views expressed above are mine alone.

Bibliometrics, global rankings, and transparency

Why do we care so much about the actual and potential uses of bibliometrics (“the generic term for data about publications,” according to the OECD), and world university ranking methodologies, but care so little about the private sector firms, and their inter-firm relations, that drive the bibliometrics/global rankings agenda forward?

This question came to mind when I was reading the 17 June 2010 issue of Nature magazine, which includes a detailed assessment of various aspects of bibliometrics, including the value of “science metrics” to assess aspects of the impact of research output (e.g., publications) as well as “individual scientific achievement”.

The Nature special issue, especially Richard Van Noorden’s survey on the “rapidly evolving ecosystem” of [biblio]metrics, is well worth a read. Even though bibliometrics can be a problematic and fraught dimension of academic life, they are rapidly becoming an accepted dimension of the governance (broadly defined) of higher education and research. Bibliometrics are generating a diverse and increasingly deep impact regarding the governance process at a range of scales, from the individual (a key focus of the Nature special issue) through to the unit/department, the university, the discipline/field, the national, the regional, and the global.

Now, while the development process of this “ecosystem” is rapidly changing, and a plethora of innovations are occurring regarding how different disciplines/fields should or should not utilize bibliometrics to better understand the nature and impact of knowledge production and dissemination, it is interesting to stand back and think about the non-state actors producing, for profit, this form of technology that meshes remarkably well with our contemporary audit culture.

In today’s entry, I’ve got two main points to make, before concluding with some questions to consider.

First, it seems to me that there is a disproportionate amount of research being conducted on the uses and abuses of metrics in contrast to research on who the producers of these metrics are, how these firms and their inter-firm relations operate, and how they attempt to influence the nature of academic practice around the world.

Now, I am not seeking to imply that firms such as Elsevier (producer of Scopus), Thomson Reuters (producer of the ISI Web of Knowledge), and Google (producer of Google Scholar) are necessarily generating negative impacts (see, for example, ‘Regional content expansion in Web of Science®: opening borders to exploration’, a good news story from Thomson Reuters that we happily sought out), but I want to make the point that there is a glaring disjuncture between the volume of research conducted on bibliometrics versus research on these firms (the bibliometricians), and how these technologies are brought to life and to market. For example, a search of Thomson Reuters’ ISI Web of Knowledge for terms like Scopus, Thomson Reuters, Web of Science and bibliometrics generates a nearly endless list of articles comparing the main databases, the innovations associated with them, and so on, but amazingly little research on Elsevier or Thomson Reuters (i.e., the firms). From thick to thin, indeed, and somewhat analogous to the lack of substantial research available on ratings agencies such as Moody’s or Standard & Poor’s.

Second, and on a related note, the role of firms such as Elsevier and Thomson Reuters, not to mention QS Quacquarelli Symonds Ltd, and TSL Education Ltd, in fueling the global rankings phenomenon has received remarkably little attention in contrast to vigorous debates about methodologies. For example, the four main global ranking schemes, past and present:

all draw from the databases provided by Thomson Reuters and Elsevier.

One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly not stated) logic for such a frequency. Why not every 3-4 years, for example, perhaps in alignment with the World Cup or the Olympics? I can understand why rankings have to happen more frequently than the US’ long-delayed National Research Council (NRC) scheme, and they certainly need to happen more frequently than the years France wins the World Cup championship title (sorry…) but why rank every single year?

But, let’s think about this issue with the firms in mind versus the pros and cons of the methodologies in mind.

From a firm perspective, the annual cycle arguably needs to become normalized for it is a mechanism to extract freely provided data out of universities. This data is clearly used to rank but is also used to feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

QS Quacquarelli Symonds Ltd, for example, was marketing such services (see an extract, above, from a brochure) at their stand at the recent NAFSA conference in Kansas City, while Thomson Reuters has been busy developing what they deem the Global Institutional Profiles Project. This latter project is being spearheaded by Jonathan Adams, a former Leeds University staff member who established a private firm (Evidence Ltd) in the early 1990s that rode the UK’s Research Assessment Exercise (RAE) and European ERA waves before being acquired by Thomson Reuters in January 2009.

Sophisticated on-line data entry portals (see a screen grab of one above) are also being created. These portals build a free-flowing pipeline (at least in one direction) between the administrative offices of hundreds of universities around the world and the firms doing the ranking.

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

A key objective, then, seems to involve using annual global rankings to update fee-generating databases, not to mention boost intra-firm knowledge bases and capabilities (for consultancies), all operational at the global scale.

In closing, three questions. First, is the posited disjuncture between research on bibliometrics vs research on bibliometricians, and the information service firms these units are embedded within, worth noting and doing something about?

Second, what is the rationale for annual rankings versus a more measured rankings window, in a temporal sense? Indeed, why not synchronize all global rankings to specific years (e.g., 2010, 2014, 2018) so as to reduce strains on universities vis-à-vis the provision of data, and enable timely comparisons between competing schemes? A more measured pace would arguably reflect the actual pace of change within our higher education institutions versus the needs of these private firms.

And third, are firms like Thomson Reuters and Elsevier, as well as their partners (esp., QS Quacquarelli Symonds Ltd and TSL Education Ltd), being as transparent as they should be about the nature of their operations? Perhaps it would be useful to have accessible disclosures/discussions about:

  • What happens with all of the data that universities freely provide?
  • What is stipulated in the contracts between teams of rankers (e.g., Times Higher Education and Thomson Reuters)?
  • What rights do universities have regarding the open examination and use of all of the data and associated analyses created on the basis of the data universities originally provided?
  • Who should be governing, or at least observing, the relationship between these firms and the world’s universities? Is this relationship best continued on a bilateral firm to university basis? Or is the current approach inadequate? If it is perceived to be inadequate, should other types of actors be brought into the picture at the national scale (e.g., the US Department of Education or national associations of universities), the regional-scale (e.g., the European University Association), and/or the global scale (e.g., the International Association of Universities)?

In short, is it not time that the transparency agenda the world’s universities are being subjected to also be applied to the private sector firms that are driving the bibliometrics/global rankings agenda forward?

Kris Olds

Developments in world institutional rankings; SCImago joins the club

Editor’s note: this guest entry was kindly written by Gavin Moodie, principal policy adviser of Griffith University in Australia. Gavin is most interested in the relations between vocational and higher education. His book From Vocational to Higher Education: An International Perspective was published by McGraw-Hill last year. Gavin’s entry sheds light on a new ranking initiative that needs to be situated within the broad wave of contemporary rankings – and bibliometrics more generally – that are being used to analyze, legitimize, critique, and promote higher education, not to mention extract revenue from it. Our thanks to Gavin for the illuminating contribution below.

~~~~~~~~~~~~~~~~~~~~~~~~

It has been a busy time for world institutional rankings watchers recently. Shanghai Jiao Tong University’s Institute of Higher Education published its academic ranking of world universities (ARWU) for 2009. The institute’s 2009 rankings include its by now familiar ranking of 500 institutions’ overall performance and the top 100 institutions in each of five broad fields: natural sciences and mathematics, engineering/technology and computer sciences, life and agriculture sciences, clinical medicine and pharmacy, and social sciences. This year Dr. Liu and his colleagues have added rankings of the top 100 institutions in each of five subjects: mathematics, physics, chemistry, computer science and economics/business.

Times Higher Education announced that over the next few months it will develop a new method for its world university rankings which in future will be produced with Thomson Reuters. Thomson Reuters’ contribution will be guided by Jonathan Adams (Adams’ firm, Evidence Ltd, was recently acquired by Thomson Reuters).

And a new ranking has been published, SCImago institutions rankings: 2009 world report. This is a league table of research institutions by various factors derived from Scopus, the database of the huge multinational publisher Elsevier. SCImago’s institutional research rank is distinctive in including, alongside higher education institutions, government research organisations such as France’s Centre National de la Recherche Scientifique, health organisations such as hospitals, and private and other organisations. Only higher education institutions are considered here. The ranking was produced by the SCImago Research Group, a Spain-based research network “dedicated to information analysis, representation and retrieval by means of visualisation techniques”.

SCImago’s rank is very useful in not cutting off at the top 200 or 500 universities, but in including all organisations with more than 100 publications indexed in Scopus in 2007. It therefore includes 1,527 higher education institutions in 83 countries. But even so, it is highly selective, including only 16% of the world’s estimated 9,760 universities, 76% of US doctoral-granting universities, 65% of UK universities and 45% of Canada’s universities. In contrast, all of New Zealand’s universities and 92% of Australia’s universities are listed in SCImago’s rank. Some 38 countries have seven or more universities in the rank.

SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised Scimago journal rank and normalised citations per output. This discussion will concentrate on total outputs and normalised citations per output.

Together these measures show that countries have been following two broad paths to supporting their research universities. One group of countries in northern continental Europe, around Germany, has supported a reasonably even development of its research universities, while another group of countries influenced by the UK and the US has developed its research universities much more unevenly. Both approaches seem to be successful in supporting research volume and quality, at least as measured by publications and citations.

Volume of publications

Because a reasonable number of countries have several higher education institutions listed in SCImago’s rank, it is possible to consider countries’ performance rather than concentrate on individual institutions, as the smaller ranks encourage. I do this by taking the average of the performance of each country’s universities. The first measure of interest is the number of publications each university has indexed in Scopus over the five years from 2003 to 2007, which is an indicator of the volume of research. The graph in figure 1 shows the mean number of outputs for each country’s higher education research institutions. It shows only countries which have more than six universities included in SCImago’s rank, which leaves out 44 countries and thus much of the tail in institutions’ performance.

Figure 1: mean of universities’ outputs for each country with > 6 universities ranked


These data are given in table 1. The first column gives the number of higher education institutions each country has ranked in SCImago institutions rankings (SIR): 2009 world report. The second column shows the mean number of outputs indexed in Scopus for each country’s higher education research institutions from 2003 to 2007. The next column shows the standard deviation of the number of outputs for each country’s research university.

The next column in table 1 shows the coefficient of variation, which is the standard deviation divided by the mean and multiplied by 100. This is a measure of the evenness of the distribution of outputs amongst each country’s universities. Thus, the five countries whose universities had the highest average number of outputs indexed in Scopus from 2003 to 2007 – the Netherlands, Israel, Belgium, Denmark and Sweden – also had a reasonably low coefficient of variation, below 80. This indicates that research volume is spread reasonably evenly amongst those countries’ universities. In contrast, Canada, which had the sixth highest average number of outputs, also has a reasonably high coefficient of variation of 120, indicating an uneven distribution of outputs amongst Canada’s research universities.
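For readers who want to replicate this calculation, a minimal sketch is given below. The publication counts are invented placeholders, not the actual SCImago figures; only the arithmetic (mean, standard deviation, and coefficient of variation) mirrors the description above.

```python
# Minimal sketch of the calculation described above: for each country, the mean,
# standard deviation and coefficient of variation (CV) of its universities'
# publication outputs. The counts below are invented placeholders.
from statistics import mean, pstdev

outputs_by_country = {
    "Country A": [21000, 19500, 18800, 23000, 20400],  # evenly spread outputs
    "Country B": [30500, 9800, 4100, 26700, 7600],     # unevenly spread outputs
}

for country, outputs in outputs_by_country.items():
    m = mean(outputs)
    sd = pstdev(outputs)      # population standard deviation
    cv = sd / m * 100         # coefficient of variation, as defined in the text
    print(f"{country}: mean = {m:.0f}, sd = {sd:.0f}, CV = {cv:.0f}")
```

On these invented numbers, Country A returns a low CV and Country B a high one, mirroring the even versus uneven national patterns described above.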

The final column in table 1 shows the mean of SCImago’s international collaboration score, which is based on the proportion of an institution’s outputs jointly authored with someone from another country. The US’ international collaboration is rather low because US authors collaborate more often with authors at other institutions within the country.

Table 1: countries with > 6 institutions ranked by institutions’ mean outputs, 2007

Source: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report.

Citations per paper by field

We next examine citations per paper by field of research, which is an indicator of the quality of research. This is the ratio between the average citations per publication of an institution and the world number of citations per publication over the same time frame and subject area. SCImago says it computed this ratio using the method established by Sweden’s Karolinska Institutet, which it calls the ‘Item oriented field normalized citation score average’. A score of 0.8 means the institution is cited 20% below average and 1.3 means the institution is cited 30% above average.
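To make the arithmetic of this indicator concrete, here is a minimal sketch; the function name and the figures are mine, for illustration only, and are not SCImago’s actual implementation.

```python
# Sketch of the field-normalised citation score described above: an institution's
# citations per publication divided by the world citations per publication for
# the same field and time window. Figures are hypothetical.
def normalised_citation_score(institution_cites_per_paper: float,
                              world_cites_per_paper: float) -> float:
    return institution_cites_per_paper / world_cites_per_paper

# 4.0 cites/paper against a world field average of 5.0 gives 0.8 (20% below average);
# 6.5 against 5.0 gives 1.3 (30% above average).
print(normalised_citation_score(4.0, 5.0))  # 0.8
print(normalised_citation_score(6.5, 5.0))  # 1.3
```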

Figure 2 shows mean normalised citations per paper for each country’s higher education research institutions from 2003 to 2007, again showing only countries which have more than six universities included in SCImago’s rank. The graph for an indicator of research quality in figure 2 is similar in shape to the graph of research volume in figure 1.

Figure 2: mean of universities’ normalised citations per paper for each country with > 6 universities ranked

Table 2 shows countries with more than six higher education research institutions ranked by their institutions’ mean normalised citations. This measure distinguishes more sharply between institutions than volume of outputs – the coefficients of variation for countries’ mean institutional normalised citations are higher than those for number of publications. Nonetheless, several countries with high mean normalised citations have an even performance amongst their universities on this measure – Switzerland, the Netherlands, Sweden, Germany, Austria, France, Finland and New Zealand.

Finally, I wondered whether countries which had a reasonably even performance of their research universities by volume and quality of publications reflected a more equal society. To test this I obtained from the Central Intelligence Agency’s (2009) World Factbook the Gini index of the distribution of family income within a country. A country with a Gini index of 0 would have perfect equality in the distribution of family income, whereas a country with perfect inequality in its distribution of family income would have a Gini index of 100. There is a modest correlation of 0.37 between a country’s Gini index and its coefficient of variation for both publications and citations.
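A sketch of that correlation check follows. The Gini values and coefficients of variation below are placeholders rather than the data underlying the 0.37 figure, and `statistics.correlation` requires Python 3.10 or later.

```python
# Sketch of the final check described above: Pearson correlation between each
# country's Gini index and the coefficient of variation (CV) of its universities'
# outputs. Values are placeholders, not the data behind the 0.37 reported above.
from statistics import correlation  # available in Python 3.10+

gini_index = [25.0, 28.3, 30.9, 34.0, 45.0, 41.0]    # hypothetical Gini indices
cv_outputs = [60.0, 95.0, 78.0, 70.0, 120.0, 110.0]  # hypothetical CVs of outputs

# Pearson correlation coefficient for the placeholder values above
print(round(correlation(gini_index, cv_outputs), 2))
```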

Table 2: countries with > 6 institutions ranked by institutions’ normalised citations per output

Sources: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report; Central Intelligence Agency (2009) The world factbook.

Conclusion

SCImago’s institutions research rank is sufficiently comprehensive to support comparisons between countries’ research higher education institutions. It finds two patterns amongst countries whose research universities have a high average volume and quality of research publications. One group of countries has a fairly even performance of their research universities, presumably because they have had fairly even levels of government support. This group is in northern continental Europe and includes Switzerland, Germany, Sweden, the Netherlands, Austria, Denmark and Finland. The other group of countries also has a high average volume and quality of research publications, but spread much more unevenly between universities. This group includes the US, the UK and Canada.

This finding is influenced by the measure I chose to examine countries’ performance, the average of their research universities’ performance. Other results may have been found using another measure of countries’ performance, such as the number of universities a country has in the top 100 or 500 of research universities normalised by gross domestic product. But such a measure would reflect not a country’s overall performance across its research universities, but only the performance of its champions. Whether one is interested in a country’s overall performance or just the performance of its champions depends on whether one believes more benefit is gained from a few outstanding performers or several excellent performers. That would usefully be the subject of another study.

Gavin Moodie

References

Central Intelligence Agency (2009) The world factbook (accessed 29 October 2009).

SCImago institutions rankings (SIR): 2009 world report (revised edition accessed 20 October 2009).

THE-QS World University Rankings 2009: Year 6 of market making

Well, an email arrived today and I just could not help myself…I clicked on the THE-QS World University Rankings 2009 links that were provided to see who received what ranking. In addition, I did a quick Google scan of news outlets and weblogs to see what spins were already underway.

The THE-QS ranking seems to have become the locomotive for the Times Higher Education, a higher education newsletter that is published in the UK once per week. In contrast to the daily Chronicle of Higher Education, and the daily Inside Higher Ed (both based in the US), the Times Higher Education seems challenged to provide quality content of some depth even on its relatively lax once-per-week schedule. I spent four years in the UK in the mid-1990s, and can’t help but note the decline in the quality of the coverage of UK higher education news over the last decade plus.

It seems as if the Times Higher has decided to allocate most of its efforts to promoting the creation and propagation of this global ranking scheme, in contrast to providing detailed, analytical, and critical coverage of issues in the UK, let alone in the European Higher Education Area. Six steady years of rankings generate attention and advertising revenue, and enhance some aspects of power and perceived esteem. But, in the end, where is the Times Higher in analyzing the forces shaping the systems in which all of these universities are embedded, or the complex forces shaping university development strategies? Rather, we primarily seem to get increasingly thin articles, based on relatively limited original research, heaps of advertising (especially jobs), and now regular build-ups to the annual rankings frenzy. In addition, their partnership with QS Quacquarelli Symonds is leading to new regional rankings; a clear form of market-making at a new, unexploited geographic scale. Of course there are some useful insights generated by rankings, but the rankings attention is arguably making the Times Higher lazier and, dare I say, irresponsible, given the increasing significance of higher education to modern societies and economies.

In addition, I continue to be intrigued by how UK-based analysts and institutions seem infatuated with the term “international”, as if it necessarily means better quality than “national”. See, for example, the “international” elements of the current ranking in the figure below:


Leaving aside my problems with the limited scale of the survey numbers (can 9,386 academics represent the “world’s” academics? can 3,281 firm representatives represent the “world’s” employers?), and with the approach to weighting (a rough sketch of how such weighted composites are typically assembled follows below), why would a higher proportion of “international” faculty and students necessarily enhance the quality of university life?
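To illustrate the weighting concern, here is a rough sketch of how a composite ranking score of this kind is typically assembled from normalised indicators. The indicator names echo the THE-QS categories discussed here, but the weights and scores are invented for illustration and should not be read as QS’s actual methodology.

```python
# Illustrative only: a weighted composite score built from indicators that have
# already been normalised to a 0-100 scale. Weights and values are invented.
indicators = {
    "academic_peer_review": 82.0,
    "employer_review": 74.0,
    "faculty_student_ratio": 90.0,
    "citations_per_faculty": 61.0,
    "international_faculty": 95.0,
    "international_students": 88.0,
}
weights = {  # hypothetical weights summing to 1.0
    "academic_peer_review": 0.40,
    "employer_review": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

composite = sum(indicators[k] * weights[k] for k in indicators)
print(round(composite, 2))
```

Even small weights on the two “international” indicators move the composite; the question raised above is whether they measure quality at all, not how much they count.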

Some universities, especially in Australasia and the UK, seek high proportions of international students to compensate for declining levels of government support, and weak levels of extramural funding via research income (which provides streams of income via overhead charges). Thus the higher number of international students may be, in some cases, inversely related to the quality of the university or the health of the public higher education system in which the university is embedded.

In addition, in some contexts, universities are legally required to limit “non-resident” student intake given the nature of the higher education system in place. But in the metrics used here, universities with the incentives and the freedom to let in large numbers of foreign students, for reasons other than the quality of said students, are rewarded with a higher rank.

The discourse of “international” is elevated here, much as it was in the last Research Assessment Exercise (RAE) in the UK, with “international” serving as a codeword for higher quality. But international is just that – international – and it means nothing more than that unless we assess how good they (international students and faculty) are, what they contribute to the educational experience, and what lasting impacts they generate.

In any case, the THE-QS rankings are out. The relative position of universities in the rankings will be debated, and used to provide legitimacy for new or previously unrecognized claims. But it is really the methodology that needs to be unpacked, as well as the nature and logics of the rankers, versus just the institutions that are being ranked.

Kris Olds

Moody’s ‘Special Comment’ report on the global recession and public/private universities

They say a year is a long time in politics. This last year has been a particularly long one, not only in political and policy circles, but for whole nations and their institutions. The sub-prime mortgage collapse quickly turned into a fiscal meltdown and is now a full-blown global recession.  ‘Hunkering down’, weathering the effects, and practicing ‘recession-style prudence and risk management’ is now the new game in town.

So how are universities doing in this highly uncertain, fiscally-brutal environment? Clearly there are many kinds of stories which can be, and are being, told — from departments closing to new ventures being advanced.

One such story is being put forward by Moody’s — one of the two big global rating agencies whose pronouncements on the creditworthiness of nations and institutions make them particularly powerful and worth noting (see also our earlier background report on rating agencies and higher education).

In June, Moody’s released a Special Comment report on higher education called Global Recession and Universities: Funding Strains to Keep Up with Rising Demand which makes for particularly interesting reading. The lead author of the report is Roger Goodman, Vice President-Senior Credit Officer, Moody’s Investors Service, New York.  Our thanks to University World News for bringing the report to our attention in their 5 July story ‘US: Universities fair well in recession, says Moody’s‘), and to Moody’s for permission to publish the figure below.

Essentially their argument is that (particularly in the public sector):

…universities are proving to be appealing investments for government stimulus efforts due to the sector’s stabilising, countercyclical nature in the short term as well as its potential to stimulate long term economic growth.

…Most universities demonstrate countercyclical ability to increase student enrollments during recessions, receive relatively strong support from sponsoring governments, and offer long term potential for increasing revenue diversity.

On page 3 of their report, Moody’s offer a useful graphic on the enrollment impact of recessions (see Fig 1 below).


In other words, as the economy nose-dives, individuals are more likely to consider investing in more education as a means of waiting out the recession, and positioning themselves for the labour market when it revives. For Moody’s this all means a possible ‘tail-wind’ for universities as student demand increases — particularly those with an access-oriented agenda.

Moody’s report outlines five key ideas:

  1. While universities will experience some stress, they will be more sheltered than other sectors.
  2. Public university ‘credit quality’ will be steadier than that of private universities.
  3. Private universities can achieve a high rating if they can show evidence of sustained demand, financial strength and clear liquidity.
  4. Universities are likely to seek more alternative sources of funding to offset the pressure on government balance sheets and limitations on public funding growth.
  5. Despite efforts at diversifying, the public sector will continue to play a central role.

There are several issues worth noting here. The first is that individuals have been encouraged to invest in a graduate education, very often at considerable personal expense (loans and so on) with the promise of future earnings that outpace non-graduate earnings. If wages are depressed across the public and the private sectors because governments and firms are having to manage the consequences of bailing out the banks, then a graduate education might not be as appealing as it once was.

Second, aside from the stark black-and-white categorizing of ‘public’ and ‘private’ in this report (for instance, is the University of Sydney, or the University of Wisconsin-Madison, public or private, given that both receive around 14-18% of their core budget from government funding?), Moody’s also offers us something of a paradox.

To weather the storm, public universities are going to have to become more ‘private’ in order to augment meagre government budgets.  However, the more private a once public university is, the greater the risk. Is this not a classic case of catch-22?

Susan Robertson

CHERPA-network based in Europe wins tender to develop alternative global ranking of universities


Finally, the decision on who has won the European Commission’s million-euro tender – to develop and test a global ranking of universities – has been announced.

The successful bidder – the CHERPA network (or the Consortium for Higher Education and Research Performance Assessment) – is charged with developing a ranking system to overcome what is regarded by the European Commission as the limitations of the Shanghai Jiao Tong and the QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises a consortium of leading institutions in the field within Europe; all have been developing and offering rather different approaches to ranking over the past few years (see our earlier stories here, here and here for some of the potential contenders):

Will this new European Commission-driven initiative set the proverbial European cat amongst the Transatlantic alliance pigeons?

As we have noted in earlier commentary on university rankings, the different approaches tip the rankings playing field in the direction of different interests. Much to the chagrin of the continental Europeans, the high status US universities do well on the Shanghai Jiao Tong University Ranking, whilst Britain’s QS-Times Higher Education tends to see UK universities feature more prominently.

CHERPA will develop a design that follows the so-called ‘Berlin Principles on the ranking of higher education institutions’. These principles stress the need to take the linguistic, cultural and historical contexts of the educational systems into account [this fact is something of an irony for those watchers following UK higher education developments last week following a Cabinet reshuffle – where reference to ‘universities’ in the departmental name was dropped. The two-year-old Department for Innovation, Universities and Skills has now been abandoned in favor of a mega-Department for Business, Innovation and Skills! (read more here)].

According to the website of one of the consortium members, CHE:

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

The different rankings will be targeted at different stakeholders: They will support decision-making in universities and especially better informed study decisions by students. Rankings that create transparency for prospective students should promote access to higher education.

University World News, in its report out today on the announcement, notes:

Testing will take place next year and must include a representative sample of at least 150 institutions with different missions in and outside Europe. At least six institutions should be drawn from the six large EU member states, one to three from the other 21, plus 25 institutions in North America, 25 in Asia and three in Australia.

There are multiple logics and politics at play here. On the one hand, a European ranking system may well give the European Commission more HE  governance capacity across Europe, strengthening its steering over national systems in areas like ‘internationalization’ and ‘regional engagement’ – two key areas that have been identified for work to be undertaken by CHERPA.

On the other hand, this new European ranking  system — when realized — might also appeal to countries in Latin America, Africa and Asia who currently do not feature in any significant way in the two dominant systems. Like the Bologna Process, the CHERPA ranking system might well find itself generating ‘echoes’ around the globe.

Or, will regions around the world prefer to develop and promote their own niche ranking systems, elements of which were evident in the QS.com Asia ranking that was recently launched? Whatever the outcome, as we have observed before, there is a thickening industry with profits to be had on this aspect of the emerging global higher education landscape.

Susan Robertson

QS.com Asian University Rankings: niches within niches…within…

Today, for the first time, the QS Intelligence Unit published their list of the top 100 Asian universities in their QS.com Asian University Rankings.

There is little doubt that the top performing universities have already added this latest branding to their websites, or that Hong Kong SAR will have proudly announced it has three universities in the top five while Japan has two.

QS.com Asian University Rankings is a spin-out from the QS World University Rankings published since 2005.  Last year, when the 2008 QS World University Rankings was launched, GlobalHigherEd posted an entry asking:  “Was this a niche industry in formation?”  This was in reference to strict copyright rules invoked – that ‘the list’ of decreasing ‘worldclassness’ could not be displayed, retransmitted, published or broadcast – as well as acknowledgment that rankings and associated activities can enable the building of firms such as QS Quacquarelli Symonds Ltd.

Seems like there are ‘niches within niches within….niches’ emerging in this game of deepening and extending the status economy in global higher education.  According to the QS Intelligence website:

Interest in rankings amongst Asian institutions is amongst the strongest in the world – leading to Asia being the first of a number of regional exercises QS plans to initiate.

The narrower the geographic focus of a ranking, the richer the available data can potentially be – the US News & World Report draws on 18 indicators, the Joong Ang Ilbo ranking in Korea on over 30. It is both appropriate and crucial then that the range of indicators used at a regional level differs from that used globally.

The objectives of each exercise are slightly different – whilst a global ranking seeks to identify truly world class universities, contributing to the global progress of science, society and scholarship, a regional ranking should adapt to the realities of the region in question.

Sure, the ‘regional niche’ allows QS.com to package and sell new products to Asian and other universities, as well as information to prospective students about who is regarded as ‘the best’.

However, the QS.com Asian University Rankings does more work than just that. The ranking process and product place ‘Asian universities’ into direct competition with each other, reinforce a very particular definition of ‘Asia’ (and therefore of Asian regionalism), and service an imagined, emerging Asian regional education space.

All this, whilst appearing to level the playing field by invoking regional sentiments.

Susan Robertson

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has particular expertise in the economics of education and applied econometrics.


Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, and the demand for greater accountability and transparency have all raised the need to compare university quality across countries.

Recognition of this need has also been greatly stimulated by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking reinforces the evidence that the US is well ahead of Europe in cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU ranking is always accompanied by a surge of newspaper articles that either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of France’s fragmented higher education landscape (see Les Echos, 7 August 2008).

Whether the rankers intend it or not, university rankings have taken on a life of their own: they are used by national policy makers to stimulate debate about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and that they are biased towards large, English-speaking, hard-science institutions. Whilst the limitations of the indicators underlying the THES and SJTU rankings have been extensively discussed in the relevant literature, there has so far been no attempt to examine in depth how sensitive the university ranks are to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we simultaneously activate different sources of uncertainty. These sources cover a wide spectrum of methodological assumptions (the set of selected indicators, the weighting scheme, and the aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. A frequency matrix of the university ranks is then calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than single ranks, addresses the criticism often made of league tables and ranking systems: that ranks are presented as if they were calculated under conditions of certainty, when this is rarely the case.
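To make the multi-modeling idea concrete, here is a minimal sketch in Python (not the JRC code, and using made-up indicator scores and scenario choices): ranks are recomputed under many randomly drawn weighting schemes and two aggregation rules, and a frequency matrix records how often each university obtains each rank.

    # Minimal illustrative sketch of a multi-modeling robustness check (not the JRC code).
    import random
    from collections import defaultdict

    # Hypothetical normalised indicator scores (0-100) for three universities.
    scores = {
        "Uni A": [92, 60, 75],
        "Uni B": [70, 88, 80],
        "Uni C": [65, 72, 90],
    }

    def aggregate(values, weights, method):
        # Weighted sum or geometric aggregation, depending on the scenario.
        if method == "weighted_sum":
            return sum(w * v for w, v in zip(weights, values))
        product = 1.0
        for w, v in zip(weights, values):
            product *= max(v, 1e-9) ** w
        return product

    freq = defaultdict(lambda: defaultdict(int))  # freq[university][rank] = count
    for _ in range(1000):
        raw = [random.random() for _ in range(3)]
        weights = [r / sum(raw) for r in raw]                 # random weighting scheme
        method = random.choice(["weighted_sum", "geometric"])  # random aggregation rule
        totals = {u: aggregate(v, weights, method) for u, v in scores.items()}
        for rank, uni in enumerate(sorted(totals, key=totals.get, reverse=True), 1):
            freq[uni][rank] += 1

    # The frequency matrix: how often each university occupied each rank.
    for uni in scores:
        print(uni, dict(freq[uni]))

A university whose rank varies widely across scenarios is exactly the kind of case for which, as the report argues, no meaningful single rank can be estimated.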

The main findings of the report are as follows. Both rankings are robust only in identifying the top 15 performers on either side of the Atlantic, but unreliable on the exact ordering of all other institutions. Even when all twelve indicators are combined in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for them. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach; we believe this could constitute an additional recommendation alongside the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.

Beatrice d’Hombres  and Michaela Saisana

Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, which are then added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than ten years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all – which is actually not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?
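For readers unfamiliar with the arithmetic being criticised here, the sketch below illustrates the conventional league-table recipe in Python, using invented institutions, indicators and weights rather than any ranker’s actual data: weight each indicator, sum the results into a single figure, and index the best institution to 100.

    # Illustrative league-table arithmetic (invented data and weights).
    indicators = {
        "Uni A": {"research": 80, "teaching": 60, "international": 70},
        "Uni B": {"research": 75, "teaching": 85, "international": 55},
        "Uni C": {"research": 65, "teaching": 70, "international": 90},
    }
    weights = {"research": 0.5, "teaching": 0.3, "international": 0.2}

    # Weight each indicator and sum into one overall score per institution.
    raw = {uni: sum(weights[k] * v for k, v in vals.items())
           for uni, vals in indicators.items()}

    # Rescale so the best institution sits at an index of 100.
    best = max(raw.values())
    indexed = {uni: round(100 * score / best, 1) for uni, score in raw.items()}

    # Print the "league table", best institution first.
    for uni, score in sorted(indexed.items(), key=lambda item: -item[1]):
        print(uni, score)

Note how the entire outcome hinges on the weights dictionary, which is exactly the point made below.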


Firstly, we do not believe in the ranking of entire HEIs. This is mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact the physics department is outstanding, the sociology department appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that, overall, he is doing fine.

So instead we always rank at the subject level. And the results of the first ExcellenceRanking – which focused on the natural sciences and mathematics in European universities, with a clear target group of prospective Master’s and PhD students – prove the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And these were quite closely related fields.


Secondly, we do not create values by weighting indicators and then calculating an overall value. Why? The main reason is that any weight is necessarily arbitrary – in other words, political. The person doing the weighting decides which weight to give, and by doing so pre-decides the outcome of the ranking. It gets even worse when you add the different values together into one overall value, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of two and citations a weight of one, the department will look very strong; do it the other way round and it will look pretty weak. If you then add the values you make it even worse, because you blur the difference between the two performances – and those two indicators are even rather closely related. If you sum up results from research indicators with reputation indicators, the outcome becomes entirely meaningless.
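A worked version of this example, using made-up normalised scores for two hypothetical departments, shows how simply swapping the weights reverses the ordering:

    # Two hypothetical departments with made-up normalised scores.
    departments = {
        "Dept X": {"publications": 90, "citations": 30},
        "Dept Y": {"publications": 50, "citations": 80},
    }

    def overall(dept, w_publications, w_citations):
        # Simple weighted sum of the two indicators.
        return w_publications * dept["publications"] + w_citations * dept["citations"]

    # Publications weighted 2, citations 1: Dept X looks very strong.
    print({name: overall(d, 2, 1) for name, d in departments.items()})
    # Swap the weights and the ordering flips: the weighting decides the outcome.
    print({name: overall(d, 1, 2) for name, d in departments.items()})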

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classical ranking we allow users to create “my ranking”, so they can choose the indicators they want to look at and the order in which to view them.
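The sketch below is not CHE’s actual implementation, but it illustrates the principle behind “my ranking”: the user chooses which indicators to see and in what order, and no overall value is ever computed.

    # Illustrative "my ranking" selection (hypothetical data, not CHE's system).
    departments = {
        "Physics A": {"reputation": "top", "publications": "middle", "student support": "top"},
        "Physics B": {"reputation": "middle", "publications": "top", "student support": "bottom"},
    }

    def my_ranking(data, chosen_indicators):
        # Return only the indicators the user asked for, in the user's own order.
        return {dept: [(ind, values[ind]) for ind in chosen_indicators]
                for dept, values in data.items()}

    # A user who cares most about publications, then reputation:
    print(my_ranking(departments, ["publications", "reputation"]))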

Thirdly, we strongly object to the idea of league tables. If the values that create the table are technically arbitrary (because of the weighting and the aggregation), the league table positions create the even worse illusion of distinct and decisive differences between places. They convey the impression of a real difference in quality (there is no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point – in other words, that there is a qualitative, objectively recognizable and measurable difference between place number 12 and place number 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES-QS ranking: even in the social sciences subject cluster there is a mere 4.3-point difference, on a 100-point scale, between league ranks 33 and 43. In the overall university rankings there is a meagre 6.7-point difference between ranks 21 and 41, going down to a slim 15.3-point difference between ranks 100 and 200. That is to say, the league table positions of HEIs might differ by much less than a single point, or less than 1% (of an arbitrarily set figure); the score tells us much less than the league position suggests.

Our approach, therefore, is to create groups (top, middle, bottom) that refer to the performance of each HEI relative to the other HEIs.
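As an illustration only (this is not CHE’s exact procedure), the following sketch assigns hypothetical institutions to top, middle and bottom terciles on a single indicator; note how institutions separated by a fraction of a point land in the same group rather than in “different” league positions.

    # Illustrative tercile grouping on one indicator (made-up scores, not CHE's rules).
    scores = {"A": 71.2, "B": 70.8, "C": 65.0, "D": 64.7, "E": 50.1, "F": 49.9}

    ordered = sorted(scores, key=scores.get, reverse=True)
    n = len(ordered)
    groups = {}
    for position, hei in enumerate(ordered):
        if position < n / 3:
            groups[hei] = "top"
        elif position < 2 * n / 3:
            groups[hei] = "middle"
        else:
            groups[hei] = "bottom"

    print(groups)
    # A and B are only 0.4 points apart and end up in the same group,
    # whereas a league table would insist that A is "better" than B.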


This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (including ours) is based on indicators chosen by the ranking institution. Consequently, the results of a given ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, and so on. Rankings therefore NEVER tell you who is the best, but perhaps (depending on the methodology) who is performing best – or, in our case, better than average – in the aspects considered relevant by the ranker.

A small but highly relevant point might be added here. Rankings (in the HE system, as in other areas of life) might suggest that a result on an indicator proves that an institution is performing well in the area measured by that indicator. Well, it does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the institution’s performance and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (e.g. are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in creating transparency if they are properly used, i.e. if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among various others for informing whatever decision relates to an HEI (study, cooperation, funding, etc.).

Finally, modesty is perhaps what a ranker should have in abundance. Having run the ExcellenceRanking in three different phases (the initial one in 2007, a second phase with new subjects under way now, and a repetition of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something – of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209–218
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35–52
Other researchers specialising in this (and often referring to our method) are e.g. Alex Usher, Marijk van der Wende or Simon Marginson.

Uwe Brandenburg