On being seduced by The World University Rankings (2011-12)

Well, it’s ranking season again, and the Times Higher Education/Thomson Reuters World University Rankings (2011-2012) has just been released. The outcome is available here, and a screen grab of the Top 25 universities is available to the right. Link here for a pre-programmed Google News search for stories about the topic, and link here for Twitter-related items (caught via the #THEWUR hash tag).

Polished up further after some unfortunate fall-outs from last year, this year’s edition promises an improved, shiny and clean result. But is it?

Like many people in the higher education sector, we too are interested in the ranking outcomes, though to be honest there are few surprises.

Rather, what we’d like to ask our readers to reflect on is how the world university rankings debate is configured. Configuration elements include:

  • Ranking outcomes: Where is my university, or the universities of country X, Y, and Z, positioned in a relative sense (to other universities/countries; to peer universities/countries; in comparison to last year; in comparison to an alternative ranking scheme)?
  • Methods: Is the adopted methodology appropriate and effective? How has it changed? Why has it changed?
  • Reactions: How are key university leaders, or ministers (and equivalents) reacting to the outcomes?
  • Temporality: Why do world university rankers choose to release the rankings on an annual basis when once every four or five years is more appropriate (given the actual pace of change within universities)? How did they manage to normalize this pace?
  • Power and politics: Who is producing the rankings, and how do they benefit from doing so? How transparent are they themselves about their operations, their relations (including joint ventures), their biases, their capabilities?
  • Knowledge production: As is patently evident in our recent entry ‘Visualizing the uneven geographies of knowledge production and circulation,’ there is an incredibly uneven structure to the production of knowledge, including dynamics related to language and the publishing business.  Given this, how do world university rankings (which factor in bibliometrics in a significant way) reflect this structural condition?
  • Governance matters: Who is governing whom? Who is being held to account, in which ways, and how frequently? Are the ranked capable of doing more than acting as mere providers of information (for free) to the rankers? Is an effective mechanism needed for regulating rankers and the emerging ranking industry? Do university leaders have any capability (none shown so far!) to collaborate on ranking governance matters?
  • Context(s): How do schemes like the THE’s World University Rankings, the Academic Ranking of World Universities (ARWU), and the QS World University Rankings relate to broader attempts to benchmark higher education systems, institutions, and educational and research practices or outcomes? Here we flag the EU’s new U-Multirank scheme, and the OECD’s numerous initiatives (e.g., AHELO) to evaluate university performance globally, as well as to engender debate about benchmarking. In short, are rankings like the ones just released ‘fit for purpose’ for genuinely shedding light on the quality, relevance and efficiency of higher education in a rapidly evolving global context?

The Top 400 outcomes will and should be debated, and people will be curious about the relative place of their universities in the ranked list, as well as about the welcome improvements evident in the THE/Thomson Reuters methodology. But don’t be drawn into focusing on only some of these questions, especially those dealing with outcomes, methods, and reactions.

Rather, we also need to ask harder questions about power, governance, and context, not to mention interests, outcomes, and potential collateral damage to the sector (when these rankings are released and then circulate into national media outlets, and onto ministerial desktops). There is a political economy to world university rankings, and these schemes (all of them, not just the THE World University Rankings) are laden with power and generative of substantial impacts; impacts that the rankers themselves often do not hear about, nor feel (e.g., via the reallocation of resources).

Is it not time to think more broadly, and critically, about the big issues related to the great ranking seduction?

Kris Olds & Susan Robertson

Rankings: a case of blurry pictures of the academic landscape?

Editors’ note: this guest entry has been kindly contributed by Pablo Achard (University of Geneva). After a PhD in particle physics at CERN and the University of Geneva (Switzerland), Pablo Achard (pictured to the right) moved to the universities of Marseilles (France), then Antwerp (Belgium) and Brandeis (MA), to pursue research in computational neurosciences. He currently works at the University of Geneva, where he supports the Rectorate on bibliometrics and strategic planning issues. Our thanks to Dr. Achard for this insider’s take on the challenges of making sense of world university rankings.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~

If the national rankings of universities can be traced back to the 19th century, international rankings appeared at the beginning of the 21st century [1]. Shanghai Jiao Tong University’s and Times Higher Education’s (THE) rankings were among the pioneers and remain among the most visible. But you might have heard of similar league tables designed by the CSIC, the University of Leiden, the HEEACT, QS, the University of Western Australia, RatER, Mines Paris Tech, etc. Such a proliferation certainly responds to high demand. But what are they worth? I argue here that rankings are blurry pictures of the academic landscape. As such, they are much better than complete blindness but should be used with great care.

Blurry pictures

The image of the academic landscape captured by the rankings is always a bit out of focus. This is improving with time, and we should acknowledge the rankers who make considerable efforts to improve the sharpness. Nonetheless, a perfectly sharp image remains an unattainable ideal.

First of all, it is very difficult to get clean and comparable data on such a large scale. The reality is always grey; the act of counting is black or white. Take such a central element as a “researcher”. What should you count? Heads or full-time equivalents? Full-time equivalents based on their contracts, or on the effective time spent at the university? Do you include PhD “students”? Visiting scholars? Professors on sabbatical? Research engineers? Retired professors who still run a lab? Deans who don’t? What do you do with researchers affiliated with non-university research organizations still loosely connected to a university (think of Germany or France here)? And how do you collect the data?

This difficulty in obtaining clean and comparable data is the main reason for the lack of any good indicator of teaching quality. To do it properly, one would need to evaluate students’ level of knowledge upon graduation, and ideally compare it with their level when they entered the university. To this end, the OECD is launching a project called AHELO, but it is still in its pilot phase. In the meantime, some rankers use poor proxies (like the percentage of international students) while others focus their attention on research outcomes only.

Second, some indicators are very sensitive to “noise” due to small sample sizes. This is the case for the number of Nobel prizes used in the Shanghai ranking. No doubt having 20 of them on your faculty says something about its quality. But having one, obtained years ago, for work partly or fully done elsewhere? Because of the long-tailed distribution of university rankings, such a single event won’t push a university ranked 100 into the top 10, but a university ranked 500 can gain more than a hundred places.

This dynamic seems to have occurred in the most recent THE ranking. In their new methodology, the “citation impact” of a university counts for one third of the final score. Few details were given on how this impact is calculated. But the description on the THE’s website, and the way this impact is calculated by Thomson Reuters – who provide the data to THE – in their commercial product InCites, make me believe that they used the so-called “Leiden crown indicator”. This indicator is a welcome improvement over the raw ratio of citations per publication, since it takes into account the citation behaviours of different disciplines. But it suffers from instability if you look at a small set of publications, or at publications in fields where you don’t expect many citations [2]: the denominator can become very small, leading to sky-high ratios. This is likely what happened with Alexandria University, which according to this indicator ranks 4th in the world, surpassed only by Caltech, MIT and Princeton – an unexpected result for anyone who knows the world research landscape [3].
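The small-denominator instability can be illustrated with a toy calculation. The function below is a simplified stand-in for the actual crown indicator (which involves field- and year-specific expected citation rates), and all numbers are hypothetical:

```python
# Simplified stand-in for a crown-indicator-style ratio: total observed
# citations divided by total expected (field-average) citations.
# All figures below are hypothetical, for illustration only.

def crown_ratio(citations, expected):
    """Ratio of observed citations to field-expected citations."""
    return sum(citations) / sum(expected)

# A large, typical set: 100 papers in a field averaging ~10 citations each.
large_set = crown_ratio([12] * 100, [10.0] * 100)

# A tiny set in a low-citation field: the denominator is small, so a
# handful of citations produces a sky-high ratio.
tiny_set = crown_ratio([5, 0, 1], [0.2, 0.2, 0.2])

print(large_set)  # 1.2 – close to the world average
print(tiny_set)   # 10.0 – far above it, driven by the tiny denominator
```

Six citations spread over three barely-cited papers are enough to outscore a large, consistently well-cited portfolio by a factor of eight.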

Third, it is well documented that the act of measuring triggers the act of manipulating the measure. And this is made easy when the data are provided by the universities themselves, as for the THE or QS rankings. One can only be suspicious when reading the cases highlighted by Bookstein and colleagues: “For whatever reason, the quantity THES assigned to the University of Copenhagen staff-student ratio went from 51 (the sample median) in 2007 to 100 (a score attained by only 12 other schools in the top 200) […] Without this boost, Copenhagen’s […] ranking would have been 94 instead of 51. Another school with a 100 student-staff rating in 2009, Ecole Normale Supérieure, Paris, rose from the value of 68 just a year earlier, […] thus earning a ranking of 28 instead of 48.”

Pictures of a landscape are taken from a given point of view

But let’s suppose that the rankers can improve their indicators to obtain perfectly focused images. Let’s imagine that we have clean, robust and hard-to-manipulate data to rely on. Would the rankings then give a neutral picture of the academic landscape? Certainly not. There is no such thing as “neutrality” in any social construct.

Some rankings are built with a precise output in mind. The most laughable example of this was Mines Paris Tech’s ranking, placing itself and four other French “grandes écoles” in the top 20. This is probably the worst flaw of any ranking. But other types of biases are always present, even if less visible.

Most rankings are built with a precise question in mind. Let’s look at the evaluation of research impact. Are you interested in finding the key players? In that case the volume of citations is one way to go. Or are you interested in finding the most efficient institutions? In that case you would normalize the citations to some input (number of articles, number of researchers, or budget). Different questions need different indicators, hence different rankings. This is the approach followed by Leiden, which publishes several rankings at a time. However, this is not the sexiest or most media-friendly approach.

Finally, all rankings are built with a model in mind of what a good university is. “The basic problem is that there is no definition of the ideal university”, a point made forcefully today by University College London’s Vice-Chancellor. Often the Harvard model is the implicit one; in this case, getting Harvard on top is a way to check for “mistakes” in the design of the methodology. But the missions of the university are many. One usually talks about the production (research) and the dissemination (teaching) of knowledge, together with a “third mission” towards society that can in turn have many different meanings, from the creation of spin-offs to the reduction of social inequities. These different missions call for different indicators. The salary of fresh graduates is probably a good indicator for judging MBA programmes and certainly a bad one for liberal arts colleges.

To pursue the photography metaphor, every snapshot is taken from a given point of view and with a given aim. Points of view and aims can be explicit, as in artistic photography. They can also pretend to neutrality, as in photojournalism. But that neutrality is wishful thinking. The same applies to rankings.

Useful pictures

Rankings are nevertheless useful pictures. Insiders who have a comprehensive knowledge of the global academic landscape understandably laugh at rankings’ flaws. However, the increase in the number of rankings and in their use tells us that they fill a need. Rankings can be viewed as the dragon of New Public Management and accountability assaulting the ivory tower of disinterested knowledge. They certainly participate in a global shift in the contract between society and universities. But I can hardly believe that the Times would spend thousands, if not millions, for such a purpose.

What then is the social use of rankings? I think they are the most accessible view of the academic landscape for millions of “outsiders”. The CSIC ranks around 20,000 (yes, twenty thousand!) higher education institutions. Who can expect everyone to be aware of their qualities? Think of young students, employers, politicians or academics from not-so-well-connected universities. Is everyone in the Midwest able to evaluate the quality of research at a school strangely named Eidgenössische Technische Hochschule Zürich?

Even to insiders, rankings tell us something. Thanks to improvements in picture quality and to the multiplication of points of view, rankings form an image that is far from uninteresting. If a university is regularly in the top 20, that is significant: you can expect to find there one of the best research and teaching environments. If it is regularly in the top 300, that is also significant: you can expect to find one of the few universities where the “global brain market” plays out. If a country – like China – increases its share of good universities over time, that is significant too, and indicates that a long-term ‘improvement’ (at least in the direction of what is being ranked as important) of its higher education system is under way.

Of course, any important decision about where to study, where to work or which project to embark on must be based on more criteria than rankings. Just as one would never go mountain climbing based solely on blurry snapshots of the mountain range, one should not use rankings as one’s sole source of information about universities.

Pablo Achard


Notes

[1] See Ben Wildavsky, The Great Brain Race: How Global Universities are Reshaping the World, Princeton University Press, 2010; more specifically chapter 4, “College rankings go global”.

[2] The Leiden researchers have recently decided to adopt a more robust indicator for their studies (http://arxiv.org/abs/1003.2167). But whatever the indicator used, the problem will remain for small statistical samples.

[3] See recent discussions on the University Ranking Watch blog for more details on this issue.



Bibliometrics, global rankings, and transparency

Why do we care so much about the actual and potential uses of bibliometrics (“the generic term for data about publications,” according to the OECD), and world university ranking methodologies, but care so little about the private sector firms, and their inter-firm relations, that drive the bibliometrics/global rankings agenda forward?

This question came to mind when I was reading the 17 June 2010 issue of Nature magazine, which includes a detailed assessment of various aspects of bibliometrics, including the value of “science metrics” to assess aspects of the impact of research output (e.g., publications) as well as “individual scientific achievement”.

The Nature special issue, especially Richard Van Noorden’s survey of the “rapidly evolving ecosystem” of [biblio]metrics, is well worth a read. Even though bibliometrics can be a problematic and fraught dimension of academic life, they are rapidly becoming an accepted dimension of the governance (broadly defined) of higher education and research. Bibliometrics are having a diverse and increasingly deep impact on governance processes at a range of scales, from the individual (a key focus of the Nature special issue) through the unit/department, the university, and the discipline/field, to the national, the regional, and the global.

Now, while this “ecosystem” is developing rapidly, and a plethora of innovations are occurring regarding how different disciplines/fields should or should not utilize bibliometrics to better understand the nature and impact of knowledge production and dissemination, it is interesting to stand back and think about the non-state actors producing, for profit, this form of technology that meshes remarkably well with our contemporary audit culture.

In today’s entry, I’ve got two main points to make, before concluding with some questions to consider.

First, it seems to me that there is a disproportionate amount of research being conducted on the uses and abuses of metrics in contrast to research on who the producers of these metrics are, how these firms and their inter-firm relations operate, and how they attempt to influence the nature of academic practice around the world.

Now, I am not seeking to imply that firms such as Elsevier (producer of Scopus), Thomson Reuters (producer of the ISI Web of Knowledge), and Google (producer of Google Scholar) are necessarily generating negative impacts (see, for example, ‘Regional content expansion in Web of Science®: opening borders to exploration’, a good news story from Thomson Reuters that we happily sought out). But I want to make the point that there is a glaring disjuncture between the volume of research conducted on bibliometrics and research on these firms (the bibliometricians), and on how these technologies are brought to life and to market. For example, a search of Thomson Reuters’ ISI Web of Knowledge for terms like Scopus, Thomson Reuters, Web of Science and bibliometrics generates a nearly endless list of articles comparing the main databases, the innovations associated with them, and so on, but amazingly little research on Elsevier or Thomson Reuters (i.e., the firms). From thick to thin, indeed, and somewhat analogous to the lack of substantial research available on ratings agencies such as Moody’s or Standard and Poor’s.

Second, and on a related note, the role of firms such as Elsevier and Thomson Reuters, not to mention QS Quacquarelli Symonds Ltd and TSL Education Ltd, in fueling the global rankings phenomenon has received remarkably little attention in contrast to vigorous debates about methodologies. The four main global ranking schemes, past and present, all draw from the databases provided by Thomson Reuters and Elsevier.

One of the interesting aspects of these firms’ involvement with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly no stated) logic for such a frequency. Why not every 3-4 years, for example, perhaps in alignment with the World Cup or the Olympics? I can understand why rankings have to happen more frequently than the US’ long-delayed National Research Council (NRC) scheme, and they certainly need to happen more frequently than the years France wins the World Cup (sorry…), but why rank every single year?

But let’s think about this issue with the firms in mind, rather than the pros and cons of the methodologies.

From a firm perspective, the annual cycle arguably needs to become normalized because it is a mechanism for extracting freely provided data from universities. These data are clearly used to rank, but they also feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission, which is intensely involved in benchmarking and is now bankrolling a European ranking scheme), and the like.

QS Quacquarelli Symonds Ltd, for example, was marketing such services (see an extract, above, from a brochure) at its stand at the recent NAFSA conference in Kansas City, while Thomson Reuters has been busy developing what it deems the Global Institutional Profiles Project. This latter project is being spearheaded by Jonathan Adams, a former Leeds University staff member who established a private firm (Evidence Ltd) in the early 1990s that rode the waves of the UK’s Research Assessment Exercise (RAE) and the European ERA before being acquired by Thomson Reuters in January 2009.

Sophisticated on-line data entry portals (see a screen grab of one above) are also being created. These portals build a free-flowing (if one-way) pipeline between the administrative offices of hundreds of universities around the world and the firms doing the ranking.

Data demands are becoming very resource-consuming for universities. For example, the QS template currently being dealt with by universities around the world contains 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path-dependency dynamics clearly exist: once the pipelines are laid, the complexity of data requests can be gradually ramped up.

A key objective, then, seems to be using annual global rankings to update fee-generating databases, not to mention boosting intra-firm knowledge bases and capabilities (for consultancies), all operating at the global scale.

In closing, three questions. First, is the posited disjuncture between research on bibliometrics and research on the bibliometricians, and on the information service firms these units are embedded within, worth noting and doing something about?

Second, what is the rationale for annual rankings versus a more measured ranking window, in a temporal sense? Indeed, why not synchronize all global rankings to specific years (e.g., 2010, 2014, 2018), so as to reduce the strain on universities vis-à-vis the provision of data, and enable timely comparisons between competing schemes? A more measured pace would arguably reflect the actual pace of change within our higher education institutions rather than the needs of these private firms.

And third, are firms like Thomson Reuters and Elsevier, as well as their partners (esp., QS Quacquarelli Symonds Ltd and TSL Education Ltd), being as transparent as they should be about the nature of their operations? Perhaps it would be useful to have accessible disclosures/discussions about:

  • What happens with all of the data that universities freely provide?
  • What is stipulated in the contracts between teams of rankers (e.g., Times Higher Education and Thomson Reuters)?
  • What rights do universities have regarding the open examination and use of all of the data and associated analyses created on the basis of the data universities originally provided?
  • Who should be governing, or at least observing, the relationship between these firms and the world’s universities? Is this relationship best continued on a bilateral firm to university basis? Or is the current approach inadequate? If it is perceived to be inadequate, should other types of actors be brought into the picture at the national scale (e.g., the US Department of Education or national associations of universities), the regional-scale (e.g., the European University Association), and/or the global scale (e.g., the International Association of Universities)?

In short, is it not time that the transparency agenda the world’s universities are being subjected to also be applied to the private sector firms that are driving the bibliometrics/global rankings agenda forward?

Kris Olds

Developments in world institutional rankings; SCImago joins the club

Editor’s note: this guest entry was kindly written by Gavin Moodie, principal policy adviser of Griffith University in Australia. Gavin (pictured to the right) is most interested in the relations between vocational and higher education. His book From Vocational to Higher Education: An International Perspective was published by McGraw-Hill last year. Gavin’s entry sheds light on a new ranking initiative that needs to be situated within the broad wave of contemporary rankings – and bibliometrics more generally – that are being used to analyze, legitimize, critique, and promote universities, not to mention extract revenue from them. Our thanks to Gavin for the illuminating contribution below.

~~~~~~~~~~~~~~~~~~~~~~~~

It has been a busy time for world institutional rankings watchers recently. Shanghai Jiao Tong University’s Institute of Higher Education published its academic ranking of world universities (ARWU) for 2009. The institute’s 2009 rankings include its by now familiar ranking of 500 institutions’ overall performance and the top 100 institutions in each of five broad fields: natural sciences and mathematics, engineering/technology and computer sciences, life and agriculture sciences, clinical medicine and pharmacy, and social sciences. This year Dr. Liu and his colleagues have added rankings of the top 100 institutions in each of five subjects: mathematics, physics, chemistry, computer science and economics/business.

Times Higher Education announced that over the next few months it will develop a new method for its world university rankings which in future will be produced with Thomson Reuters. Thomson Reuters’ contribution will be guided by Jonathan Adams (Adams’ firm, Evidence Ltd, was recently acquired by Thomson Reuters).

And a new ranking has been published, SCImago institutions rankings: 2009 world report. This is a league table of research institutions by various factors derived from Scopus, the database of the huge multinational publisher Elsevier. SCImago’s institutional research rank is distinctive in including with higher education institutions government research organisations such as France’s Centre National de la Recherche Scientifique, health organisations such as hospitals, and private and other organisations. Only higher education institutions are considered here. The ranking was produced by the SCImago Research Group, a Spain-based research network “dedicated to information analysis, representation and retrieval by means of visualisation techniques”.

SCImago’s rank is very useful in not cutting off at the top 200 or 500 universities, but in including all organisations with more than 100 publications indexed in Scopus in 2007. It therefore includes 1,527 higher education institutions in 83 countries. But even so, it is highly selective, including only 16% of the world’s estimated 9,760 universities, 76% of US doctoral granting universities, 65% of UK universities and 45% of Canada’s universities. In contrast all of New Zealand’s universities and 92% of Australia’s universities are listed in SCImago’s rank. Some 38 countries have seven or more universities in the rank.

SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised Scimago journal rank and normalised citations per output. This discussion will concentrate on total outputs and normalised citations per output.

Together these measures show that countries have been following two broad paths to supporting their research universities. One group of countries in northern continental Europe, around Germany, has supported a reasonably even development of its research universities, while another group, influenced by the UK and the US, has developed its research universities much more unevenly. Both seem to be successful in supporting research volume and quality, at least as measured by publications and citations.

Volume of publications

Because a reasonable number of countries have several higher education institutions listed in SCImago’s rank, it is possible to consider countries’ performance rather than concentrating on individual institutions, as the smaller rankings encourage. I do this by taking the average of the performance of each country’s universities. The first measure of interest is the number of publications each university has indexed in Scopus over the five years from 2003 to 2007, which is an indicator of the volume of research. The graph in figure 1 shows the mean number of outputs for each country’s higher education research institutions. It shows only countries which have more than six universities included in SCImago’s rank, which leaves out 44 countries and thus much of the tail in institutions’ performance.

Figure 1: mean of universities’ outputs for each country with > 6 universities ranked


These data are given in table 1. The first column gives the number of higher education institutions each country has ranked in SCImago institutions rankings (SIR): 2009 world report. The second column shows the mean number of outputs indexed in Scopus for each country’s higher education research institutions from 2003 to 2007. The third column shows the standard deviation of the number of outputs for each country’s research universities.

The fourth column in table 1 shows the coefficient of variation, which is the standard deviation divided by the mean, multiplied by 100. This is a measure of the evenness of the distribution of outputs amongst each country’s universities. Thus, the five countries whose universities had the highest average number of outputs indexed in Scopus from 2003 to 2007 – the Netherlands, Israel, Belgium, Denmark and Sweden – also had a reasonably low coefficient of variation, below 80. This indicates that research volume is spread reasonably evenly amongst those countries’ universities. In contrast, Canada, which had the sixth highest average number of outputs, also has a reasonably high coefficient of variation of 120, indicating an uneven distribution of outputs amongst Canada’s research universities.
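The coefficient of variation is straightforward to compute. The sketch below uses hypothetical output counts, and assumes the population standard deviation (the report does not specify which variant is used):

```python
# Coefficient of variation: (standard deviation / mean) * 100.
# Population std dev is assumed; the output counts are hypothetical.
import statistics

def coefficient_of_variation(values):
    """Spread of values relative to their mean, as a percentage."""
    return statistics.pstdev(values) / statistics.mean(values) * 100

# An "even" country: similar publication volumes across universities.
even_country = [900, 1000, 1100]

# An "uneven" country: one dominant university.
uneven_country = [200, 400, 3000]

print(coefficient_of_variation(even_country))    # ≈ 8: very even
print(coefficient_of_variation(uneven_country))  # ≈ 106: very uneven
```

On the thresholds used in the text, the first country would fall well below the "even" mark of 80, while the second would exceed Canada's 120-style unevenness.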

The final column in table 1 shows the mean of SCImago’s international collaboration score, which scores the proportion of an institution’s outputs jointly authored with someone from another country. The US’ international collaboration score is rather low because US authors more often collaborate with authors at other institutions within the country.

Table 1: countries with > 6 institutions ranked by institutions’ mean outputs, 2007

Source: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report.

Citations per paper by field

We next examine citations per paper by field of research, which is an indicator of the quality of research. This is the ratio between the average citations per publication of an institution and the world average citations per publication over the same time frame and subject area. SCImago says it computed this ratio using the method established by Sweden’s Karolinska Institutet, which it calls the ‘item-oriented field-normalised citation score average’. A score of 0.8 means the institution is cited 20% below the world average, and 1.3 means it is cited 30% above.
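As described, the item-oriented variant normalises each paper individually before averaging. A minimal sketch, simplified relative to the Karolinska method and using hypothetical numbers:

```python
# Item-oriented field-normalised citation score: each paper's citations are
# divided by the world average for its field and year, and the per-paper
# ratios are then averaged. All numbers are hypothetical.

def normalised_citation_score(papers):
    """papers: list of (citations, world_average_for_field_and_year) pairs."""
    ratios = [cites / world_avg for cites, world_avg in papers]
    return sum(ratios) / len(ratios)

# Three papers in fields with different citation norms.
papers = [(13, 10.0), (6, 5.0), (2, 2.0)]  # per-paper ratios: 1.3, 1.2, 1.0
score = normalised_citation_score(papers)
print(round(score, 2))  # 1.17 – cited ~17% above the world average
```

Normalising per item before averaging is what keeps a high-citation field like medicine from swamping a low-citation field like mathematics in the institution's overall score.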

Figure 2 shows mean normalised citations per paper for each country’s higher education research institutions from 2003 to 2007, again showing only countries which have more than six universities included in SCImago’s rank. The graph for an indicator of research quality in figure 2 is similar in shape to the graph of research volume in figure 1.

Figure 2: mean of universities’ normalised citations per paper for each country with > 6 universities ranked

Table 2 shows countries with more than six higher education research institutions, ranked by their institutions’ mean normalised citations. This measure distinguishes more sharply between institutions than volume of outputs does – the coefficients of variation for countries’ mean institutional normalised citations are higher than for the number of publications. Nonetheless, several countries with high mean normalised citations perform evenly across their universities on this measure – Switzerland, the Netherlands, Sweden, Germany, Austria, France, Finland and New Zealand.

Finally, I wondered whether countries with a reasonably even performance of their research universities by volume and quality of publications reflected a more equal society. To test this I obtained from the Central Intelligence Agency’s (2009) World Factbook the Gini index of the distribution of family income within a country. A country with a Gini index of 0 would have perfect equality in the distribution of family income, whereas a country with perfect inequality in its distribution of family income would have a Gini index of 100. There is a modest correlation of 0.37 between a country’s Gini index and its coefficients of variation for both publications and citations.
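The two statistics used in this test are standard ones: the coefficient of variation (standard deviation divided by the mean) and the Pearson correlation coefficient. A minimal sketch with purely illustrative numbers (not the actual SCImago or CIA data):

```python
import math
import statistics

def coefficient_of_variation(values):
    """Population standard deviation divided by the mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

def pearson_correlation(xs, ys):
    """Pearson product-moment correlation of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

# Invented example: five countries' Gini indices and the spread
# (coefficient of variation) of their universities' outputs.
gini = [25, 30, 34, 41, 45]
cv_outputs = [0.20, 0.35, 0.30, 0.55, 0.60]
print(round(pearson_correlation(gini, cv_outputs), 2))
```

A value near 0.37, as reported above, would indicate only a modest tendency for more unequal societies to have more unevenly performing research universities; the made-up numbers here are chosen merely to show the computation.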

Table 2: countries with > 6 institutions ranked by institutions’ normalised citations per output

Sources: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report; Central Intelligence Agency (2009) The world factbook.

Conclusion

SCImago’s institutions research rank is sufficiently comprehensive to support comparisons between countries’ research higher education institutions. It finds two patterns amongst countries whose research universities have a high average volume and quality of research publications. One group of countries has a fairly even performance of their research universities, presumably because they have had fairly even levels of government support. This group is in northern continental Europe and includes Switzerland, Germany, Sweden, the Netherlands, Austria, Denmark and Finland. The other group of countries also has a high average volume and quality of research publications, but spread much more unevenly between universities. This group includes the US, the UK and Canada.

This finding is influenced by the measure I chose to examine countries’ performance: the average of their research universities’ performance. Different results might have been found using another measure, such as the number of universities a country has in the top 100 or 500 research universities, normalised by gross domestic product. But such a measure would reflect not a country’s overall performance across its research universities, but only the performance of its champions. Whether one is interested in a country’s overall performance or just the performance of its champions depends on whether one believes more benefit is gained from a few outstanding performers or from a larger number of very good performers. That would usefully be the subject of another study.

Gavin Moodie

References

Central Intelligence Agency (2009) The world factbook (accessed 29 October 2009).

SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report (revised edition, accessed 20 October 2009).

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relationship to global rankings schemes:

and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, ranking(s) watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, which have been issued by the European Commission’s Directorate-General for Education and Culture, are particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is derived out of the Lifelong Learning policy and program stream of the Commission.

Thus we see a shift, in Europe, towards the implementation of an alternative scheme to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard & Poor’s credit rating report regarding UK universities (‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s ‘University rankings: deliberations and future directions’), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology, and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on the methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’ (photo below), including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds