Rankings: a case of blurry pictures of the academic landscape?

Editors’ note: this guest entry has been kindly contributed by Pablo Achard (University of Geneva). After a PhD in particle physics at CERN and the University of Geneva (Switzerland), Pablo Achard (pictured to the right) moved to the universities of Marseilles (France), Antwerp (Belgium) and Brandeis (MA) to pursue research in computational neuroscience. He currently works at the University of Geneva, where he supports the Rectorate on bibliometrics and strategic planning issues. Our thanks to Dr. Achard for this insider’s take on the challenges of making sense of world university rankings.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~

While national rankings of universities can be traced back to the 19th century, international rankings only appeared at the beginning of the 21st century [1]. Shanghai Jiao Tong University’s and Times Higher Education’s (THE) rankings were among the pioneers and remain among the most visible ones. But you might also have heard of similar league tables designed by the CSIC, the University of Leiden, the HEEACT, QS, the University of Western Australia, RatER, Mines Paris Tech, etc. Such a proliferation certainly responds to a high demand. But what are these rankings worth? I argue here that rankings are blurry pictures of the academic landscape. As such, they are much better than complete blindness, but they should be used with great care.

Blurry pictures

The image of the academic landscape captured by the rankings is always a bit out of focus. This is improving with time, and we should acknowledge the considerable efforts rankers make to improve the sharpness. Nonetheless, a perfectly sharp image remains an impossible ideal.

First of all, it is very difficult to get clean and comparable data on such a large scale. Reality is always grey; the act of counting is black or white. Take such a central element as a “researcher”. What should you count? Heads or full-time equivalents? Full-time equivalents based on contracts, or on the effective time spent at the university? Do you include PhD “students”? Visiting scholars? Professors on sabbatical? Research engineers? Retired professors who still run a lab? Deans who don’t? What do you do with researchers affiliated with non-university research organizations that are still loosely connected to a university (think of Germany or France here)? And how do you collect the data?

This difficulty in obtaining clean and comparable data is the main reason for the lack of any good indicator of teaching quality. To do it properly, one would need to evaluate students’ level of knowledge upon graduation, and possibly compare it with their level when they entered the university. To this end, the OECD is launching a project called AHELO, but it is still in its pilot phase. In the meantime, some rankers use poor proxies (like the percentage of international students) while others focus their attention on research outcomes only.

Second, some indicators are very sensitive to “noise” due to small sample sizes. This is the case for the number of Nobel prizes used in the Shanghai ranking. No doubt having 20 of them on your faculty says something about its quality. But having one, obtained years ago, for work partly or fully done elsewhere? Because of the long-tailed distribution of university rankings, such a single event won’t push a university ranked 100 into the top 10, but a university ranked 500 can gain more than a hundred places.

This dynamic seems to have occurred in the most recent THE ranking. In the new methodology, the “citation impact” of a university counts for one third of the final score. Few details were given on how this impact is calculated. But the description on THE’s website, and the way this impact is calculated by Thomson Reuters – which provides the data to THE – in its commercial product InCites, lead me to believe that they used the so-called “Leiden crown indicator”. This indicator is a welcome improvement on the raw ratio of citations per publication, since it takes into account the citation behaviours of different disciplines. But it suffers from instability if you look at a small set of publications, or at publications in fields where you don’t expect many citations [2]: the denominator can become very small, leading to sky-high ratios. This is likely what happened with Alexandria University. According to this indicator, Alexandria ranks 4th in the world, surpassed only by Caltech, MIT and Princeton – an unexpected result for anyone who knows the world research landscape [3].
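To make this instability concrete, here is a minimal Python sketch of a crown-style indicator (observed citations divided by field-expected citations). The figures are invented and this is emphatically not THE’s or Thomson Reuters’ actual code or data; it simply shows how a tiny denominator in a low-citation field can blow the ratio up.

# Minimal sketch of a crown-style (field-normalised) citation indicator.
# All figures are invented for illustration only.

def crown_indicator(citations, expected):
    """Sum of observed citations divided by sum of field-expected citations."""
    return sum(citations) / sum(expected)

# A large output: thousands of papers, observation close to expectation.
big_obs = [12, 3, 0, 7, 5] * 400        # 2,000 papers
big_exp = [6.0] * len(big_obs)          # field baseline of ~6 cites per paper
print(round(crown_indicator(big_obs, big_exp), 2))      # 0.9 -> stable

# A small output in a low-citation field: tiny denominator.
small_obs = [0, 0, 1, 45]               # one unusually well-cited paper
small_exp = [0.4, 0.4, 0.4, 0.4]        # field baseline of 0.4 cites per paper
print(round(crown_indicator(small_obs, small_exp), 2))  # 28.75 -> "sky high"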

Third, it is well documented that the act of measuring triggers attempts to manipulate the measure. And this is made easy when the data are provided by the universities themselves, as for the THE or QS rankings. One can only be suspicious when reading the cases highlighted by Bookstein and colleagues: “For whatever reason, the quantity THES assigned to the University of Copenhagen staff-student ratio went from 51 (the sample median) in 2007 to 100 (a score attained by only 12 other schools in the top 200) […] Without this boost, Copenhagen’s […] ranking would have been 94 instead of 51. Another school with a 100 student-staff rating in 2009, Ecole Normale Supérieure, Paris, rose from the value of 68 just a year earlier, […] thus earning a ranking of 28 instead of 48.”

Pictures of a landscape are taken from a given point of view

But let’s suppose that the rankers can improve their indicators to obtain perfectly focused images. Let’s imagine that we have clean, robust and hardly manipulable data to rely on. Would the rankings give a neutral picture of the academic landscape? Certainly not. There is no such thing as “neutrality” in any social construct.

Some rankings are built with a precise output in mind. The most laughable example of this was Mines Paris Tech’s ranking, placing itself and four other French “grandes écoles” in the top 20. This is probably the worst flaw of any ranking. But other types of biases are always present, even if less visible.

Most rankings are built with a precise question in mind. Take the evaluation of research impact. Are you interested in finding the key players, in which case the volume of citations is one way to go? Or are you interested in finding the most efficient institutions, in which case you would normalize citations by some input (number of articles, number of researchers, or budget)? Different questions need different indicators, hence different rankings. This is the approach followed by Leiden, which publishes several rankings at a time. However, this is not the sexiest or most media-friendly approach.

Finally, all rankings are built with a model in mind of what a good university is. “The basic problem is that there is no definition of the ideal university”, a point made forcefully today by University College London’s Vice-Chancellor. Often, the Harvard model is the implicit one. In this case, getting Harvard on top is a way to check for “mistakes” in the design of the methodology. But the missions of the university are many. One usually talks about the production (research) and the dissemination (teaching) of knowledge, together with a “third mission” towards society that can in turn have many different meanings, from the creation of spin-offs to the reduction of social inequities. These different missions call for different indicators. The salary of fresh graduates is probably a good indicator for judging MBA programmes and certainly a bad one for liberal arts colleges.

To pursue the photography metaphor, every snapshot is taken from a given point of view and with a given aim. Points of view and aims can be made visible, as in artistic photography. They can also pretend to neutrality, as in photojournalism. But this neutrality is wishful thinking. The same applies to rankings.

Useful pictures

Rankings are nevertheless useful pictures. Insiders who have a comprehensive knowledge of the global academic landscape understandably laugh at rankings’ flaws. However, the increase in the number of rankings and in their use tells us that they fill a need. Rankings can be viewed as the dragon of New Public Management and accountability assaulting the ivory tower of disinterested knowledge. They certainly participate in a global shift in the contract between society and universities. But I can hardly believe that the Times would spend thousands if not millions for such a purpose.

What then is the social use of rankings? I think they are the most accessible vision of the academic landscape for millions of “outsiders”. The CSIC ranks around 20,000 (yes twenty thousand!) higher education institutions. Who can expect everyone to be aware of their qualities?  Think of young students, employers, politicians or academics from not-so-well connected universities. Is everyone in the Midwest able to evaluate the quality of research at a school strangely named Eidgenössische Technische Hochschule Zürich?

Even to insiders, rankings tell us something. Thanks to improvements in picture quality and to the multiplication of points of view, rankings form an image that is not uninteresting. If a university is regularly in the top 20, this is significant: you can expect to find there one of the best research and teaching environments. If it is regularly in the top 300, this is also significant: you can expect to find one of the few universities where the “global brain market” takes place. If a country – like China – increases its share of good universities over time, this is significant too, and indicates that a long-term ‘improvement’ (at least in the direction of what is being ranked as important) of its higher education system is under way.

Of course, any important decision about where to study, where to work or which project to embark on must be based on more criteria than rankings. Just as one would never go mountain climbing on the sole basis of blurry snapshots of the range, one should not use rankings as one’s only source of information about universities.

Pablo Achard


Notes

[1] See The Great Brain Race: How Global Universities are Reshaping the World, Ben Wildavsky, Princeton University Press, 2010; and more specifically its chapter 4, “College rankings go global”.

[2] The Leiden researchers have recently decided to adopt a more robust indicator for their studies (http://arxiv.org/abs/1003.2167). But whatever the indicator used, the problem will remain for small statistical samples.

[3] See recent discussions on the University Ranking Watch blog for more details on this issue.



A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge over Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the No. 1 and No. 2 placings). For non-rankings watchers, Harvard had been pegged as No. 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, as well as the decline in US universities’ shares of “international faculty,” was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front end (post-ranking) acquisition of the data — focuses the majority of costs on the universities and the majority of benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist for once the pipelines are laid the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end the cost is primarily borne by the taxpayer (for most universities are public). It is the taxpayer who pays the majority of the administrators’ salaries that enable them to compile the data and submit it to the rankers.

A second, though indirect and obscured, cost relates to the use of rankings data by credit rating agencies like Moody’s or Standard & Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities’). Given that the cost of borrowing for universities is determined by their credit-worthiness, and that rankings are used in this process, we can conclude that any increase in the cost of borrowing is actually also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to freely use the base rankings data for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled.  Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, all of the base data, and not just the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September) or the QS World University Rankings Results 2010, should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, meaning exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that society should have free, open and timely access to the outcomes of taxpayer-funded research. Surely this open access principle applies just as well to university rankings data!

Another reason society deserves to have free, open and timely access to the data is that a change in practices will shed light on how the organizations ranking universities implement their methodologies; methodologies that are ever changing (and hence more open to error).

Finer-grained access to the data would enable us to check out exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, we know that outcomes this close call for cross-checking the data, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between the releases of the hyper-competitive QS vs THE world university rankings.

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it will admittedly rebalance the current focus of the majority of the costs on the universities, and the majority of the benefits on the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

Developments in world institutional rankings; SCImago joins the club

Editor’s note: this guest entry was kindly written by Gavin Moodie, principal policy adviser of Griffith University in Australia. Gavin (pictured to the right) is most interested in the relations between vocational and higher education. His book From Vocational to Higher Education: An International Perspective was published by McGraw-Hill last year. Gavin’s entry sheds light on a new ranking initiative that needs to be situated within the broad wave of contemporary rankings – and bibliometrics more generally – being used to analyze, legitimize, critique and promote higher education, not to mention extract revenue from it. Our thanks to Gavin for the illuminating contribution below.

~~~~~~~~~~~~~~~~~~~~~~~~

It has been a busy time for world institutional rankings watchers recently. Shanghai Jiao Tong University’s Institute of Higher Education published its academic ranking of world universities (ARWU) for 2009. The institute’s 2009 rankings include its by now familiar ranking of 500 institutions’ overall performance and the top 100 institutions in each of five broad fields: natural sciences and mathematics, engineering/technology and computer sciences, life and agriculture sciences, clinical medicine and pharmacy, and social sciences. This year Dr. Liu and his colleagues have added rankings of the top 100 institutions in each of five subjects: mathematics, physics, chemistry, computer science and economics/business.

Times Higher Education announced that over the next few months it will develop a new method for its world university rankings which in future will be produced with Thomson Reuters. Thomson Reuters’ contribution will be guided by Jonathan Adams (Adams’ firm, Evidence Ltd, was recently acquired by Thomson Reuters).

And a new ranking has been published, SCImago institutions rankings: 2009 world report. This is a league table of research institutions by various factors derived from Scopus, the database of the huge multinational publisher Elsevier. SCImago’s institutional research rank is distinctive in including with higher education institutions government research organisations such as France’s Centre National de la Recherche Scientifique, health organisations such as hospitals, and private and other organisations. Only higher education institutions are considered here. The ranking was produced by the SCImago Research Group, a Spain-based research network “dedicated to information analysis, representation and retrieval by means of visualisation techniques”.

SCImago’s rank is very useful in not cutting off at the top 200 or 500 universities, but in including all organisations with more than 100 publications indexed in Scopus in 2007. It therefore includes 1,527 higher education institutions in 83 countries. But even so, it is highly selective, including only 16% of the world’s estimated 9,760 universities, 76% of US doctoral granting universities, 65% of UK universities and 45% of Canada’s universities. In contrast all of New Zealand’s universities and 92% of Australia’s universities are listed in SCImago’s rank. Some 38 countries have seven or more universities in the rank.

SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised Scimago journal rank and normalised citations per output. This discussion will concentrate on total outputs and normalised citations per output.

Together these measures show that countries have been following two broad paths to supporting their research universities. One group of countries in northern continental Europe around Germany has supported a reasonably even development of its research universities, while another group of countries influenced by the UK and the US has developed its research universities much more unevenly. Both approaches seem to be successful in supporting research volume and quality, at least as measured by publications and citations.

Volume of publications

Because a reasonable number of countries have several higher education institutions listed in SCImago’s rank it is possible to consider countries’ performance rather than concentrate on individual institutions as the smaller ranks encourage. I do this by taking the average of the performance of each country’s universities. The first measure of interest is the number of publications each university has indexed in Scopus over the five years from 2003 to 2007, which is an indicator of the volume of research. The graph in figure 1 shows the mean number of outputs for each country’s higher education research institutions. It shows only countries which have more than six universities included in SCImago’s rank, which leaves out 44 countries and thus much of the tail in institutions’ performance.

Figure 1: mean of universities’ outputs for each country with > 6 universities ranked


These data are given in table 1. The first column gives the number of higher education institutions each country has ranked in SCImago institutions rankings (SIR): 2009 world report. The second column shows the mean number of outputs indexed in Scopus for each country’s higher education research institutions from 2003 to 2007. The next column shows the standard deviation of the number of outputs for each country’s research universities.

The next column in table 1 shows the coefficient of variation, which is the standard deviation divided by the mean and multiplied by 100. This is a measure of the evenness of the distribution of outputs amongst each country’s universities. Thus, the five countries whose universities had the highest average number of outputs indexed in Scopus from 2003 to 2007 – the Netherlands, Israel, Belgium, Denmark and Sweden – also had a reasonably low coefficient of variation, below 80. This indicates that research volume is spread reasonably evenly amongst those countries’ universities. In contrast, Canada, which had the sixth highest average number of outputs, also has a reasonably high coefficient of variation of 120, indicating an uneven distribution of outputs amongst Canada’s research universities.
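For readers who want to reproduce this kind of calculation, here is a minimal sketch of the coefficient of variation with made-up output counts (not the actual SIR figures); note that the sketch uses the population standard deviation, while the report may use the sample version.

import statistics

def coefficient_of_variation(values):
    """Standard deviation divided by the mean, multiplied by 100."""
    return statistics.pstdev(values) / statistics.mean(values) * 100

# Invented 2003-2007 output counts for two hypothetical countries' universities.
even_country = [5200, 4800, 5500, 5100, 4900]     # research volume spread evenly
uneven_country = [14000, 2000, 1500, 1200, 900]   # one dominant institution

print(round(coefficient_of_variation(even_country)))    # ~5   -> very even
print(round(coefficient_of_variation(uneven_country)))  # ~129 -> very uneven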

The final column in table 1 shows the mean of SCImago’s international collaboration score, which reflects the proportion of an institution’s outputs jointly authored with someone from another country. The United States’ international collaboration score is rather low because US authors more often collaborate with authors at other institutions within their own country.

Table 1: countries with > 6 institutions ranked by institutions’ mean outputs, 2007

Source: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report.

Citations per paper by field

We next examine citations per paper by field of research, which is an indicator of the quality of research. This is the ratio between the average citations per publication of an institution and the world average citations per publication over the same time frame and subject area. SCImago says it computed this ratio using the method established by Sweden’s Karolinska Institutet, which it calls the ‘Item oriented field normalized citation score average’. A score of 0.8 means the institution is cited 20% below average, and 1.3 means the institution is cited 30% above average.
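As a rough illustration of the ‘item oriented’ logic (the method itself is Karolinska’s, not mine, and the numbers below are invented), each paper’s citations are compared with the world baseline for its own field and year, and the per-paper ratios are then averaged:

# Sketch of an item-oriented, field-normalised citation score. Invented numbers.
papers = [
    {"citations": 12, "world_avg": 8.0},   # 50% above its field/year baseline
    {"citations": 3,  "world_avg": 6.0},   # 50% below its field/year baseline
    {"citations": 10, "world_avg": 10.0},  # exactly at the world average
]

score = sum(p["citations"] / p["world_avg"] for p in papers) / len(papers)
print(round(score, 2))  # 1.0 -> this institution is cited at the world average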

Figure 2 shows mean normalised citations per paper for each country’s higher education research institutions from 2003 to 2007, again showing only countries which have more than six universities included in SCImago’s rank. The graph for an indicator of research quality in figure 2 is similar in shape to the graph of research volume in figure 1.

Figure 2: mean of universities’ normalised citations per paper for each country with > 6 universities ranked

Table 2 shows countries with more than six higher education research institutions ranked by their institutions’ mean normalised citations. This measure distinguishes more sharply between institutions than the volume of outputs – the coefficients of variation for countries’ mean institutional normalised citations are higher than those for the number of publications. Nonetheless, several countries with high mean normalised citations have an even performance amongst their universities on this measure – Switzerland, the Netherlands, Sweden, Germany, Austria, France, Finland and New Zealand.

Finally, I wondered whether countries with a reasonably even performance of their research universities by volume and quality of publications reflected a more equal society. To test this I obtained from the Central Intelligence Agency’s (2009) World Factbook the Gini index of the distribution of family income within each country. A country with a Gini index of 0 would have perfect equality in the distribution of family income, whereas a country with perfect inequality in its distribution of family income would have a Gini index of 100. There is a modest correlation of 0.37 between a country’s Gini index and its coefficient of variation for both publications and citations.
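That last step is a plain Pearson correlation between the two country-level series; a minimal sketch with placeholder values (not the CIA or SIR figures) looks like this:

import statistics

# Hypothetical (Gini index, coefficient of variation) pairs for six countries.
gini = [28, 30, 34, 38, 41, 45]
cv = [60, 75, 70, 95, 110, 130]

r = statistics.correlation(gini, cv)  # Pearson's r; requires Python 3.10+
print(round(r, 2))  # high for these invented values; the real data give a modest 0.37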

Table 2: countries with > 6 institutions ranked by institutions’ normalised citations per output

Sources: SCImago Research Group (2009) SCImago institutions rankings (SIR): 2009 world report; Central Intelligence Agency (2009) The world factbook.

Conclusion

SCImago’s institutions research rank is sufficiently comprehensive to support comparisons between countries’ research higher education institutions. It finds two patterns amongst countries whose research universities have a high average volume and quality of research publications. One group of countries has a fairly even performance of their research universities, presumably because they have had fairly even levels of government support. This group is in northern continental Europe and includes Switzerland, Germany, Sweden, the Netherlands, Austria, Denmark and Finland. The other group of countries also has a high average volume and quality of research publications, but spread much more unevenly between universities. This group includes the US, the UK and Canada.

This finding is influenced by the measure I chose to examine countries’ performance: the average of their research universities’ performance. Other results might have been found using another measure, such as the number of universities a country has in the top 100 or 500 research universities, normalised by gross domestic product. But such a measure would not reflect the overall performance of a country’s research universities, only the performance of its champions. Whether one is interested in a country’s overall performance or just the performance of its champions depends on whether one believes more benefit is gained from a few outstanding performers or from several excellent performers. That would usefully be the subject of another study.

Gavin Moodie

References

Central Intelligence Agency (2009) The world factbook (accessed 29 October 2009).

SCImago institutions rankings (SIR): 2009 world report (revised edition accessed 20 October 2009).

QS.com Asian University Rankings: niches within niches…within…

Today, for the first time, the QS Intelligence Unit published their list of the top 100 Asian universities in their QS.com Asian University Rankings.

There is little doubt that the top performing universities have already added this latest branding to their websites, or that Hong Kong SAR will have proudly announced it has three universities in the top 5 while Japan has 2.

QS.com Asian University Rankings is a spin-out from the QS World University Rankings published since 2005.  Last year, when the 2008 QS World University Rankings was launched, GlobalHigherEd posted an entry asking:  “Was this a niche industry in formation?”  This was in reference to strict copyright rules invoked – that ‘the list’ of decreasing ‘worldclassness’ could not be displayed, retransmitted, published or broadcast – as well as acknowledgment that rankings and associated activities can enable the building of firms such as QS Quacquarelli Symonds Ltd.

Seems like there are ‘niches within niches within….niches’ emerging in this game of deepening and extending the status economy in global higher education.  According to the QS Intelligence website:

Interest in rankings amongst Asian institutions is amongst the strongest in the world – leading to Asia being the first of a number of regional exercises QS plans to initiate.

The narrower the geographic focus of a ranking, the richer the available data can potentially be – the US News & World Report draws on 18 indicators, the Joong Ang Ilbo ranking in Korea on over 30. It is both appropriate and crucial then that the range of indicators used at a regional level differs from that used globally.

The objectives of each exercise are slightly different – whilst a global ranking seeks to identify truly world class universities, contributing to the global progress of science, society and scholarship, a regional ranking should adapt to the realities of the region in question.

Sure, the ‘regional niche’ allows QS.com to package and sell new products to Asian and other universities, as well as information to prospective students about who is regarded as ‘the best’.

However, the QS.com Asian University Rankings does more work than just that.  The ranking process and product places ‘Asian universities’ into direct competition with each other, it reinforces a very particular definition of ‘Asia’ and therefore Asian regionalism, and it services an imagined emerging Asian regional education space.

All this, whilst appearing to level the playing field by invoking regional sentiments.

Susan Robertson

Regional content expansion in Web of Science®: opening borders to exploration

Editor’s note: this guest entry was written by James Testa, Senior Director, Editorial Development & Publisher Relations, Thomson Reuters. It was originally published on an internal Thomson Reuters website. James Testa (pictured to the left) joined Thomson Reuters (then ISI) in 1983. From 1983 through 1996 he managed the Publisher Relations Department and was directly responsible for building and maintaining working relations with the over three thousand international scholarly publishers whose journals are indexed by Thomson Reuters. In 1996 Mr. Testa was appointed the Director of Editorial Development. In this position he directed a staff of information professionals in the evaluation and selection of journals and other publication formats for coverage in the various Thomson Reuters products. In 2007 he was named Senior Director, Editorial Development & Publisher Relations. In this combined role he continues to build content for Thomson Reuters products and work to increase efficiency in communication with the international STM publishing community. He is a member of the American Society of Information Science and Technology (ASIST) and has spoken frequently on behalf of Thomson Reuters in the Asia Pacific region, South America, and Europe.

Our thanks also go to Susan Besaw of Thomson Reuters for facilitating access to the essay. This guest entry ties in to one of our earlier entries on this topic (‘Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production’), as well as a fascinating new entry (‘The Canadian Center of Science and Education and Academic Nationalism’) posted on the consistently excellent Scott Sommers’ Taiwan Blog.

~~~~~~~~~~~~~~~~~~~~~

Thomson Reuters extends the power of its Journal Selection Process by focusing on the world’s best regional journals. The goal of this initiative is to enrich the collection of important and influential international journals now covered in Web of Science with a number of superbly produced journals whose content is of specific regional importance.

Since its inception nearly fifty years ago by Eugene Garfield, PhD, the primary goal of the Journal Selection Process has been to identify those journals which formed the core literature of the sciences, social sciences, and arts & humanities. These journals publish the bulk of scholarly research, receive the most citations from the surrounding literature, and have the highest citation impact of all journals published today. The journals selected for the Web of Science are, in essence, the scholarly publications that meet the broadest research needs of the international community of researchers. They have been selected on the basis of their high publishing standards, their editorial content, the international diversity of their contributing authors and editorial board members, and on their relative citation frequency and impact. International journals selected for the Web of Science define the very highest standards in the world of scholarly publishing.

In recent years, however, the user community of the Web of Science has expanded gradually from what was once a concentration of major universities and research facilities in the United States and Western Europe to an internationally diverse group including virtually all major universities and research centers in every region of the world. Where once the Thomson Reuters sales force was concentrated in Philadelphia and London, local staff are now committed to the service of customers at offices in Japan, Singapore, Australia, Brazil, China, France, Germany, Taiwan, India, and South Korea.

As the global distribution of Web of Science expands into virtually every region on earth, the importance of regional scholarship to our emerging regional user community also grows. Our approach to regional scholarship effectively extends the scope of the Thomson Reuters Journal Selection Process beyond the collection of the great international journal literature: it now moves into the realm of the regional journal literature. Its renewed purpose is to identify, evaluate, and select those scholarly journals that target a regional rather than an international audience. Bringing the best of these regional titles into the Web of Science will illuminate regional studies that would otherwise not have been visible to the broader international community of researchers.

In the Fall of 2006, the Editorial Development Department of Thomson Reuters began this monumental task. Under the direction of Maureen Handel, Manager of Journal Selection, the team of subject editors compiled a list of over 10,000 scholarly publications representing all areas of science, social science, the arts, and humanities. Over the next twelve months the team was able to select 700 regional journals for coverage in the Web of Science.

The Web of Science Regional Journal Profile

These regional journals are typically published outside the US or UK. Their content often centers on topics of regional interest or that are presented with a regional perspective. Authors may be largely from the region rather than an internationally diverse group. Bibliographic information is in English with the exception of some arts and humanities publications that are by definition in native language (e.g. literature studies). Cited references must be in the Roman alphabet. All journals selected are publishing on time and are formally peer reviewed. Citation analysis may be applied but the real importance of the regional journal is measured by the specificity of its content rather than its citation impact.

Subject Areas and Their Characteristics

These first 700 journals selected in 2007 included 161 Social Science titles, 148 Clinical Medicine titles, 108 Agriculture/Biology/Environmental Science titles, 95 Physics/Chemistry/Earth Science titles, 89 Engineering/Computing/Technology titles, 61 Arts/Humanities titles, and 38 Life Sciences titles. The editors’ exploration of each subject area surfaced hidden treasure.

Social Sciences:
The European Union and Asia Pacific regions yielded over 140 social science titles. Subject areas such as business, economics, management, and education have been enriched with regional coverage. Several fine law journals have been selected and will provide balance in an area normally dominated by US journals. Because of the characteristically regional nature of many studies in the social sciences, this area will provide a rich source of coverage that would otherwise not be available to the broader international community.

Clinical Medicine:
Several regional journals dealing with General Medicine, Cardiology, and Orthopedics have been selected. Latin America, Asia Pacific, and European Union are all well represented here. Research in Surgery is a growing area in regional journals. Robotic and other novel surgical technology is no longer limited to the developed nations but now originates in China and India as well and has potential use internationally.

The spread of diseases such as bird flu and SARS eastward and westward from Southeast Asia is a high interest topic regionally and internationally. In some cases host countries develop defensive practices and, if enough time elapses, vaccines. Regional studies on these critical subjects will now be available in Web of Science.

Agriculture/Biology/Environmental Sciences:
Many of the selected regional titles in this area include new or endemic taxa of interest globally. Likewise, regional agriculture or environmental issues are now known to result in global consequences. Many titles are devoted to niche topics such as polar/tundra environmental issues or tropical agronomy. Desertification has heightened the value of literature from central Asian countries. Iranian journals report voluminously on the use of native, desert-tolerant plants and animals that may soon be in demand by desertification-threatened countries.

Physics/Chemistry/Earth Sciences:
Regional journals focused on various aspects of Earth Science are now available in Web of Science. These include titles focused on geology, geography, oceanography, meteorology, climatology, paleontology, remote sensing, and geomorphology. Again, the inherently regional nature of these studies provides a unique view of the subject and brings forward studies heretofore hidden.

Engineering/Computing/Technology:
Engineering is a subject of global interest. Regional Journals in this area typically present subject matter as researched by regional authors for their local audience. Civil and Mechanical Engineering studies are well represented, providing solutions to engineering problems arising from local geological, social, environmental, climatological, or economic factors.

Arts & Humanities:
The already deep coverage of Arts & Humanities in Web of Science is now enhanced by additional regional publications focused on such subjects as History, Linguistics, Archaeology, and Religion. Journals from countries in the European Union, Latin America, Africa, and Asia Pacific regions are included.

Life Sciences:
Life Sciences subject areas lending themselves to regional studies include parasitology, microbiology, and pharmacology. A specific example of valuable regional activity is stem cell research. The illegality of stem cell studies in an increasing number of developed countries has moved the research to various Asian countries, where it is of great interest inside and outside of the region.

Conclusion

The primary mission of the Journal Selection Process is to identify, evaluate and select the top tier international and regional journals for coverage in the Web of Science. These are the journals that have the greatest potential to advance research on a given topic. In the pursuit of this goal Thomson Reuters has partnered with many publishers and societies worldwide in the development of their publications. As an important by-product of the steady application of the Journal Selection Process, Thomson Reuters is actively involved in raising the level of research communication as presented in journals. The objective standards described in the Journal Selection Process will now be focused directly on a new and expansive body of literature. Our hope, therefore, is not only to enrich the editorial content of Web of Science, but also to expand relations with the world’s primary publishers in the achievement of our mutual goal: more effective communication of scientific results to the communities we serve.

James Testa

Author’s note: This essay was compiled by James Testa, Senior Director, Editorial Development & Publisher Relations. Special thanks to Editorial Development staff members Maureen Handel, Mariana Boletta, Rodney Chonka, Lauren Gala, Anne Marie Hinds, Katherine Junkins-Baumgartner, Chang Liu, Kathleen Michael, Luisa Rojo, and Nancy Thornton for their critical reading and comments.

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relation to global ranking schemes, and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, ranking(s) watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, which has been issued by the European Commission, Directorate-General for Education and Culture, is particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is derived from the Lifelong Learning policy and program stream of the Commission.

Thus we see a shift, in Europe, towards the implementation of an alternative scheme to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play in importance terms. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support or to a relatively rapid wielding of the hatchet at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon the relative rankings of each university’s position in the RAE, which is the aggregate effect of the position of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities’).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are clearly heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News and World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much as if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF)-backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that allows national (intra-European) peaks and, presumably, valleys of quality output to be mapped for the humanities as a whole, and for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers are worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC. And it is underpinned by the analytical cum revenue-generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities

This week, one of the two major credit rating agencies in the world, Standard & Poor’s (Moody’s is the other), issued their annual ‘Report Card’ on UK universities. This year’s version is titled UK Universities Enjoy Higher Revenues but Still Face Spending Pressures and it has received a fair bit of attention in media outlets (e.g., the Financial Times and The Guardian). Our thanks to Standard & Poor’s for sending us a copy of the report.

Five UK universities were in the spotlight after having their creditworthiness rated by Standard & Poor’s (S&P’s). In total, S&P’s assesses 20 universities in the UK (5 of the ratings are made public, the rest are confidential), with 90% of those rated considered by the rating agency to be of high investment-grade quality (A- or above).

Universities in the UK, it would appear from S&P’s Report Card, have had a relatively good year from ‘a credit perspective’. This pronouncement is surely something to celebrate in a year when the term ‘credit crunch’ has become the new metaphor for economic meltdown, and when higher education institutions are likely to be worried about the effects of the sub-prime mortgage lending crisis on loans to students and institutions more generally.

But to the average lay person (or even the average university professor), with a generally low level of financial literacy, what does all this mean? What does it mean to have global rating agencies passing judgment on UK universities, on the policies that drive the sector more generally, or on individual institutional governance decisions?

Three years ago, when one of us (Susan) was delivering an Inaugural Professorial Address at Bristol, S&P’s 2005 report on Bristol (AA/Stable/–) was flashed up, much to the amusement of the audience though to the bemusement of the Chair, a senior university leader. The mild embarrassment of the Chair was largely a consequence of the fact that he was unaware of this judgment on Bristol by a credit rating agency headquartered in New York.

Now the reason for showing S&P’s judgment on the University of Bristol was neither to amuse the audience nor to embarrass the Chair. The point at the time was to sketch out the changing landscape of globalizing education systems within the wider global political economy, to introduce some of the newer (and more private) players who increasingly wield policymaking/shaping power on the sector, to reflect on how these agencies work, and to delineate some of the emerging effects of such developments on the sector.

Our view is that current analyses of globalizing higher education have neglected the role of credit rating agencies in the governance of the higher education sector—as specialized forms of intelligence gathering, shaping and judgment determination on universities. Yet, credit rating agencies are, in many ways, at the heart of contemporary global governance. Witness, for example, the huge debates going on now about establishing a European register for ratings agencies.

The release this week of S&P’s UK Universities 2008 Report Card is thus an opportunity for GlobalHigherEd to sketch out for interested readers a basic understanding of global rating agencies and their relationship to the global governance of higher education.

Rating agencies – origins

Timothy Sinclair, a University of Warwick academic, has been writing for more than a decade on rating agencies and their roles in what he calls the New Global Finance (NGF) (Sinclair, 2000). His various articles and books (see, for example, Sinclair 1994; 2000; 2003; 2005)—some of which are listed below—are worth reading for those of you who want to pursue the topic in greater depth.

Sinclair outlines the early development and subsequent growing importance of credit rating agencies—the masters of capital and second superpowers—arguing that there have been a number of distinct phases in their development.

The first phase dates back to the 1850s, when compendiums of information were produced for American financial markets about large industrial infrastructure developments, such as railroads and canals. However, it was not until the 1907 financial crisis that these early compendiums of information were used to make judgements about the creditworthiness of debtors (Sinclair, 2003: 148).

‘Rating’ then entered a period of rapid growth from the mid-1930s onwards, as a result of state governments in the US incorporating rating standards into their prudential rules for investment by pension funds.

A third phase began in the 1980s, when new financial innovations (particularly low-rated or junk bonds) were developed, and cheaper offshore non-national money markets were created (that is, places where funds are raised by selling debt obligations and equity outside of the current constraints of government regulation).

However this process, of what Sinclair (1994: 136) calls the ‘disintermediation’ of financing (meaning state regulatory bodies are side-stepped), creates information problems for those wishing to lend money and those wishing to borrow it.

The current phase is characterized by, on the one hand, greater internationalization of finance, and on the other hand, the increased significance of capital markets that challenge the role of banks as intermediaries.

Credit rating agencies have, as a result, become more important as suppliers of the information with which to make credit-worthiness judgments.

New York-based rating agencies have grown rapidly since then, responding to innovations in financial instruments, on the one hand, and the need for information, on the other. Demand for information has also generated competition within the industry, with some firms developing niche specializations – as we see, for instance, with Standard & Poor’s (itself a subsidiary of the publisher McGraw-Hill) and the higher education sector.

Credit rating is big, big business. As Sinclair (2005) notes, the two major credit rating agencies, Moody’s and Standard & Poor’s, pass judgments on around $30 trillion worth of securities each year. Ratings also affect the rates or costs of borrowing: the higher the rating, the lower the risk of default on repayment to the lender and therefore the lower the cost to the borrower.

Universities with different credit ratings will, therefore, be differently placed to borrow – so that the adage of ‘the more you have the more you get’ becomes a major theme.

The rating process

If we look at the detail of the ‘issuer credit rating’ and ‘comments’ in the Report Card for, say, the University of Bristol or King’s College London, we can see that information is gathered on the financial rating of the issuer; on the industry, competitors, and economy; on legal advice related to the specific issue; on management, policy, business outlook, accounting practices and so on; and on the competitive position, quality of management, long-term industry prospects, and wider economic environment. As Sinclair (2003: 150) notes:

The rating agencies are most interested in data on cash flow relative to debt service obligations. They want to know how liquid the company is, and where there will be timely problems likely to hinder repayment. Other information may include five-year financial projections, including income statements and balance sheets, analysis of capital spending plans, financing alternatives, and contingency plans. This information which may not be publicly known is supplemented by agency research into the value of current outstanding obligations, stock valuations and other publicly available data that allows for an inference…
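
The ‘cash flow relative to debt service obligations’ that Sinclair highlights is conventionally summarized in a coverage ratio. The following is only a minimal sketch of that calculation, with hypothetical figures; it is not S&P’s actual model or data.

```python
def debt_service_coverage(operating_cash_flow: float, debt_service: float) -> float:
    """Cash available per unit of interest and principal falling due in a year.
    Values above 1.0 suggest obligations can be met out of current cash flow."""
    return operating_cash_flow / debt_service

# Hypothetical university: 30m in operating cash flow, 12m in annual debt service.
print(round(debt_service_coverage(30_000_000, 12_000_000), 2))  # -> 2.5
```

A ratio comfortably above 1.0, sustained across the kind of five-year financial projections the agencies ask for, is the sort of signal that supports a strong rating.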

The rating that follows – an opinion on creditworthiness – is generated by an analytical team: a report is prepared with the rating and rationale, this is put to a rating committee made up of senior officials, and a final determination is made in private. The decision is subject to appeal by the issuer. Issuer credit ratings can be either long or short term. S&P uses the following nomenclature for long-term issue credit ratings (see Bankers Almanac, 2008: 1-3):

  • AAA – extremely strong capacity to meet financial commitments (the highest rating)
  • AA – very strong capacity to meet financial commitments
  • A – strong capacity to meet financial commitments, but susceptible to the adverse effects of changes in circumstances and economic conditions
  • BBB – adequate capacity to meet financial commitments
  • BB – less vulnerable in the near term than other lower-rated obligors, but faces major ongoing uncertainties
  • B – more vulnerable than BB, but adverse business, financial or economic conditions will likely impair the obligor’s capacity to meet its financial commitments
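
To connect this nomenclature back to the earlier point that a higher rating generally means a lower cost of borrowing, here is a minimal sketch. The spreads are invented, purely illustrative numbers – they are not S&P data – and only the rating labels above come from the source.

```python
# Hypothetical basis-point spreads over a risk-free benchmark, keyed by the
# long-term rating scale listed above. These numbers are invented for
# illustration; real spreads move with market conditions.
HYPOTHETICAL_SPREADS_BPS = {
    "AAA": 40, "AA": 60, "A": 90, "BBB": 150, "BB": 300, "B": 500,
}

def annual_interest(principal: float, risk_free_rate: float, rating: str) -> float:
    """Annual interest cost on a borrowing, given a hypothetical rating spread."""
    spread = HYPOTHETICAL_SPREADS_BPS[rating] / 10_000  # basis points -> decimal
    return principal * (risk_free_rate + spread)

# A university borrowing £100m against a 4% risk-free benchmark:
for rating in ("AAA", "A", "BBB"):
    print(f"{rating}: ~£{annual_interest(100_000_000, 0.04, rating):,.0f} per year")
```

On these assumed numbers, the AAA-rated institution pays roughly £1.1m a year less than the BBB-rated one on the same £100m – ‘the more you have the more you get’ in miniature.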

Rating higher education institutions

In light of the above discussion, we can now look more closely at the kinds of judgments passed on those universities included in a typical Report Card on the sector by Standard & Poor’s (see 2008: 7).

The 2008 Report Card itself is short: a nine-page document which offers a ‘credit perspective’ on the sector more generally, and on 5 universities. We are told “the UK higher education sector has made positive strides over the past few years, but faces increasing risks in the medium-to-long term” (p. 2).

The Report goes on to note a trebling of tuition fees in the UK, the growth of the overseas student market and associated income, and an increase in research income for research-intensive universities – so that of the 5 universities rated, 1 has been upgraded, another has had its outlook revised to ‘positive’, and no ratings were adjusted for the other three.

The Report also notes (p. 2) that the universities publicly rated by S&P’s are among the leading universities in the UK. To support this claim they refer to another ranking mechanism that is now providing information in the global marketplace – The Times Higher QS World Universities Rankings 2007, which is, as we have noted in a recent entry (‘Euro angsts‘), receiving considerable critical attention in Europe.

However, the Report Card also notes pressures within the system: higher wage demands linked to tuition increases, the search for new researchers to be counted as part of the UK’s Research Assessment Exercise (RAE), global competition for international students, and the heightened expectations of students for better infrastructure as a result of higher fees.

Longer-term risks include the fact that by 2020, according to forecasts by Universities UK, there will be 16% fewer 18-year-olds coming through the system – with the biggest impact being on the newer universities (in the UK these so-called ‘newer universities’ are former polytechnics that were granted university status in 1992).

Of the 20 UK universities rated in this S&P’s Report Card, 4 are rated AAA, 8 are rated AA, 6 are rated A, and 2 are rated BBB. The University of Bristol, as we can see from the analysts’ rating and comments which we have reproduced below, is given a relatively favorable rating. We have also quoted this rating at length to give you a sense of the kind of commentary made and how this relates to the judgment passed.


Credit rating agencies as instruments of the global governance of higher education

Credit rating agencies are particularly powerful because both markets and governments see them as authoritative sources of judgment, with the result that they are major actors in controlling access to capital markets. And despite the evident importance of credit rating agencies in the governance of universities in the UK and elsewhere, there is a remarkable lack of attention to this phenomenon. We think there are important questions that need to be researched and the results discussed more widely. For example:

  • How widely spread is the practice?
  • Why are some universities rated whilst others are not?
  • Why are some universities’ ratings considered confidential whilst others are not (keeping in mind that they are all, in the above UK case, public taxpayer supported universities)?
  • Have any universities contested their credit rating, and if so, through what process, and with what outcome?
  • How do universities’ management systems respond to these credit ratings, and in what ways might they influence ongoing policy decisions within the university and within the sector?
  • How robust are particular kinds of reputational or status ‘information’, such as World University Rankings, especially if we are looking at creditworthiness?

Our reports on these global rankings show that there are major problems with such measures. As we have profiled, and as University Ranking Watch and the Beerkens’ Blog have also documented, the debates about global ranking schemes remain far from resolved.

Clearly market liberalism, of the kind that has characterized this current period of globalization, requires new kinds of intermediaries to provide information for both buyer and seller. And it cannot hurt to have ‘outside’ assessments of the fiscal health of institutions (in this case universities) that are complex, often opaque, and taxpayer supported. However, to experts like Timothy Sinclair (2003), credit rating agencies privatize policymaking, and they can narrow the sphere of government intervention.

For EU Internal Market Commissioner Charlie McCreevy, credit rating agencies like Moody’s and S&P’s contributed to the current financial market turmoil because they underestimated the risks related to structured credit products. As the Commissioner commented in EurActiv in June: “No supervisor appears to have got as much as a sniff of the rot at the heart of the structured finance rating process before it all blew up.”

In other words, credit rating agencies lack political accountability and enjoy an ‘accountability gap’. And while efforts are now under way by regulators to close that gap by developing new regulatory frameworks and rules, analysts worry that these private actors will now find new ways around the rules, and in turn facilitate the creation of a riskier financial architecture (as happened with global mortgage markets).

As universities become more financialized, as well as ranked, indexed and barometered in the ways we have been mapping on GlobalHigherEd, such ‘information’ on the sector will also likely be deployed to pass judgment and generate ratings and rankings of ‘creditworthiness’ for universities. The net effect may well be to exaggerate the differences between institutions, to generate greater levels of uneven development within and across the sector, and to increase rather than decrease the opacity of the sector, thereby weakening its accountability.

In sum, there is little doubt that credit rating agencies, in passing judgments, play a key and increasingly important role in the global governance of higher education. It is also clear from these developments that we need to pay much closer attention to what might be thought of as mundane entities – credit rating agencies – and to the work they do. And we are also hopeful that credit rating agencies will outline their own views on this important dimension of the small-g governance of higher education institutions.

Selected References

Bankers Almanac (2008) Standard & Poor’s Definitions, last accessed 5 August 2008.

King, M. and Sinclair, T. (2003) Private actors and public policy: a requiem for the new Basel Capital Accord, International Political Science Review, 24 (3), pp. 345-62.

Sinclair, T. (1994) Passing judgement: credit rating processes as regulatory mechanisms of governance in the emerging world order, Review of International Political Economy, 1 (1), pp. 133-159.

Sinclair, T. (2000) Reinventing authority: embedded knowledge networks and the new global finance, Environment and Planning C: Government and Policy, August 18 (4), pp. 487-502.

Sinclair, T. (2003) Global monitor: bond rating agencies, New Political Economy, 8 (1), pp. 147-161.

Sinclair, T. (2005) The New Masters of Capital: American Bond Rating Agencies and the Politics of Creditworthiness, Ithaca, NY: Cornell University Press.

Standard & Poor’s (2008) Report Card: UK Universities Enjoy Higher Revenues But Still Face Spending Pressures, London: Standard & Poor’s.

Susan Robertson and Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the combined list consists of 189 universities…

Quite the authority being vetted in ranking schemes that are still in the process of being hotly debated!
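
As an aside, the arithmetic behind that figure is simply a set union: two top-150 lists that yield 189 unique institutions must share 111 names (150 + 150 - 189). Here is a minimal sketch of that calculation, using placeholder names rather than the actual rankings:

```python
# Two placeholder top-150 lists constructed so that they overlap in 111 names,
# mirroring the overlap implied by the Dutch policy proposal (150 + 150 - 189 = 111).
list_a = {f"university_{i}" for i in range(150)}        # stands in for one league table
list_b = {f"university_{i}" for i in range(39, 189)}    # stands in for the other; shares 111 names with list_a

eligible = list_a | list_b       # union: a degree from any of these counts
print(len(eligible))             # -> 189
```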

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high-quality universities despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what has hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment cum governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence, until now, of a serious European critique of, or at least interrogation of, rankings and the rankers. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking are a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until more recently, how deeply geopolitical and geoeconomic these vehicles are, enabling the powerful to maintain their standing and to draw in yet more resources. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced their acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As in Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities in the way that Saudi Arabia recently did with the King Abdullah University of Science and Technology (KAUST) and its $10 billion endowment, then angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be defined as a fait accompli phase, national and European political leaders progressively came to view the low status of European universities in the two key ranking schemes – Shanghai and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – are so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above only include representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst they generate and how many impacts they have. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to sustained action is pretty quick in some sectors, and not in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, etc. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics of which they are part and parcel.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university rankings schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged, for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French. Yet in some ways it would have been better to have the report released simultaneously in both French and English. But the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU or one of the many emerging initiatives above would be wise to have the report immediately translated and placed on some relevant websites so that it can be downloaded for review and debate.

Thomson Innovation, UK Research Footprints®, and global audit culture

Thomson Scientific, the private firm fueling the bibliometrics drive in academia, is in the process of positioning itself as the anchor point for data on intellectual property (IP) and research. Following tantalizers in the form of free reports such as World IP Today: A Thomson Scientific Report on Global Patent Activity from 1997-2006 (from which the two images below are taken), Thomson Scientific is establishing, in phases, Thomson Innovation, which will provide, when completed:

  • Comprehensive prior art searching with the ability to search patents and scientific literature simultaneously
  • Expanded Asian patent coverage, including translations of Japanese full-text and additional editorially enhanced abstracts of Chinese data
  • A fully integrated searchable database combining Derwent World Patent Index® (DWPI℠) with full-text patent data to provide the most comprehensive patent records available
  • Support of strategic intellectual property decisions through:
    • powerful analysis and visualization tools, such as charting, citation mapping and search result ranking
    • and integration of business and news resources
  • Enhanced collaboration capabilities, including customizable folder structures that enable users to organize, annotate, search and share relevant files.

[Images: thomsonpatent1.jpg and thomsonpatent2.jpg – charts of global patent activity taken from World IP Today]

Speaking of bibliometrics, Evidence Ltd., the private firm that is shaping some of the debates about the post-Research Assessment Exercise (RAE) system of evaluating research quality and impact in UK universities, recently released the UK Higher Education Research Yearbook 2007. This £255 (for higher education customers) report:

[P]rovides the means to gain a rapid overview of the research strengths of any UK Higher Education institution, and compare its performance with that of its peers. It is an invaluable tool for those wishing to assess their own institution’s areas of relative strength and weakness, as well as a versatile directory for those looking to invest in UK research. It will save research offices in any organisation with R&D links many months of work, allowing administrative and management staff the opportunity to focus on the strategic priorities that these data will help to inform….

It sets out in clear diagrams and summary tables the research profile for Universities and Colleges funded for research. Research Footprints® compare each institution’s performance to the average for its sector, allowing strengths and weaknesses to be rapidly identified by research managers and by industrial customers.

See below for one example of how a sample university (in this case the University of Warwick) has its “Research Footprint®” graphically represented. This image is included in a brief article about Warwick by Vice-Chancellor Nigel Thrift, and is available on Warwick’s News & Events website.

[Image: warwickfootprint.jpg – the University of Warwick’s Research Footprint®]
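
Evidence Ltd. does not publish its formula in the material quoted here, but a footprint of this kind presumably plots, discipline by discipline, an institution’s indicators relative to the sector average. A minimal sketch of that sort of normalisation follows, with invented metric values and assumed discipline names, purely for illustration:

```python
# Invented figures: e.g. citations per paper by discipline for one institution
# versus the sector average. The real Research Footprint® indicators and
# weightings are not given in the text above.
institution = {"Physics": 3.2, "History": 1.1, "Engineering": 2.4}
sector_avg  = {"Physics": 2.0, "History": 1.0, "Engineering": 2.0}

footprint = {field: institution[field] / sector_avg[field] for field in institution}
for field, score in footprint.items():
    position = "above" if score > 1 else "at or below"
    print(f"{field}: {score:.2f} ({position} sector average)")
```

Plotted on a radar chart, scores like these would trace the lobes of a footprint, with anything beyond 1.0 bulging past the sector outline.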

Given the metrics that are utilized, it is clear, even if the data is not published, that individual researchers’ footprints will be available for systematic and comparative analysis, thereby enabling the governance of faculty with the back-up of ‘data’, and the targeted recruitment of the ‘big foot’ wherever s/he resides (though Sasquatches presumably need not apply!).

Kris Olds