Governing world university rankers: an agenda for much needed reform

Is it now time to ensure that world university rankers are overseen, if not governed, so as to achieve better quality assessments of the differential contributions of universities in the global higher education and research landscape?

In this brief entry we make a case that something needs to be done about the system in which world university rankers operate. We have two brief points to make about why action is needed, and then we outline some options for moving beyond today’s status quo.

First, while universities and rankers are both interested in how well universities are positioned in the emerging global higher education landscape, power over the process, as currently exercised, rests solely with the rankers. Clearly firms like QS and Times Higher Education are open to input, advice, and indeed critique, but in the end they, along with information services firms like Thomson Reuters, decide:

  • How the methodology is configured
  • How the methodology is implemented and vetted
  • When and how the rankings outcomes are released
  • Who is permitted access to the base data
  • When and how errors are corrected in rankings-related publications
  • What lessons are learned from errors
  • How the data is subsequently used

Rankers have authored the process, and universities (not to mention associations of universities, and ministries of education) have simply handed over the raw data. Observers of this process might be forgiven for thinking that universities have acquiesced to the rankers’ desires with remarkably little thought. How and why we’ve ended up in such a state of affairs is a fascinating (if not alarming) indicator of how fearful many universities are of being erased from increasingly mediatized viewpoints, and how slow universities and governments have been in adjusting to the globalization of higher education and research, including the desectoralization process. This situation has some parallels with the ways that ratings agencies (e.g., Standard and Poor’s or Moody’s) have been able to operate over the last several decades.

Second, and as has been noted in two of our recent entries:

the costs associated with providing rankers (especially QS and THE/Thomson Reuters) with data are increasingly concentrated on universities.

On a related note, there is no rationale for the now annual rankings cycle that the rankers have successfully been able to normalize. What really changes on a year-to-year basis apart from changes in ranking methodologies? Or, to paraphrase Macquarie University’s vice-chancellor, Steven Schwartz, in this Monday’s Sydney Morning Herald:

“I’ve never quite adjusted myself to the idea that universities can jump around from year to year like bungy jumpers,” he says.

“They’re like huge oil tankers; they take forever to turn around. Anybody who works in a university realises how little they change from year to year.”

Indeed, if the rationale for an annual cycle of rankings were so obvious, government ministries would surely facilitate more annual assessment exercises. Even the most managerial and bibliometrically predisposed of governments anywhere – in the UK – has spaced its intense research assessment exercise out over a 4-6 year cycle. And yet the rankers have universities on the run. Why? Because this cycle facilitates data provision for commercial databases, and it enables increasingly competitive rankers to construct their own lucrative markets. This, perhaps, explains the 6 July 2010 reaction from QS to a call in GlobalHigherEd for a four-year rather than one-year rankings cycle.

Thus we have a situation where rankers seeking to construct media/information service markets are driving up data provision time and costs for universities, facilitating continual change in methodologies, and as a consequence generating some surreal swings in ranked positions. Signs abound that rankers are driving too hard, taking too many risks, and failing to respect universities, especially those outside of the upper echelon of the rank orders.

Assuming you agree that something should happen, the options for action are many. Given what we know about the rankers, and the universities that are ranked, we have developed four options, in no order of priority, to further discussion on this topic. Clearly there are other options, and we welcome alternative suggestions, as well as critiques of our ideas below.

The first option for action is the creation of an ad-hoc task force by 2-3 associations of universities located within several world regions, the International Association of Universities (IAU), and one or more international consortia of universities. Such an initiative could build on the work of the European University Association (EUA), which created a regionally-specific task force in early 2010. Following an agreement to halt world university rankings for two years (2011 & 2012), this new ad-hoc task force could commission a series of studies regarding the world university rankings phenomenon, not to mention the development of alternative options for assessing, benchmarking and comparing higher education performance and quality. In the end the current status quo regarding world university rankings could be sanctioned, but such an approach could just as easily lead to new approaches, new analytical instruments, and new concepts that might better shed light on the diverse impacts of contemporary universities.

A second option is an inter-governmental agreement about the conditions in which world university rankings can occur. This agreement could be forged in the context of bilateral relations between ministers in select countries: a US-UK agreement, for example, would ensure that the rankers reform their practices. A variation on this theme is an agreement of ministers of education (or their equivalent) in the context of the annual G8 University Summit (to be held in 2011), or the next Global Bologna Policy Forum (to be held in 2012), which will bring together 68+ ministers of education.

The third option for action is non-engagement, as in an organized boycott. This option would have to be pushed by one or more key associations of universities. The outcome of this strategy, assuming it is effective, is the shutdown of unique data-intensive ranking schemes like the QS and THE world university rankings for the foreseeable future. Numerous other schemes (e.g., the new High Impact Universities) would carry on, of course, for they use more easily available or generated forms of data.

A fourth option is the establishment of an organization that has the autonomy, and resources, to oversee rankings initiatives, especially those that depend upon university-provided data. No such organization currently exists: the only body that comes close to what we are calling for (the IREG Observatory on Academic Ranking and Excellence) suffers from the inclusion of too many rankers on its executive committee (a recipe for serious conflicts of interest), and from its reliance on member fees for a significant portion of its budget (ditto).

In closing, the acrimonious split between QS and Times Higher Education, and the formal entry of Thomson Reuters into the world university rankings arena, have elevated this phenomenon to a new ‘higher-stakes’ level. Given these developments, given the expenses associated with providing the data, given some of the glaring errors or biases associated with the 2010 rankings, and given the problems associated with using university-scaled quantitative measures to assess ‘quality’ in a relative sense, we think it is high time for some new forms of action. And by action we don’t mean more griping about methodology, but attention to the ranking system that universities are embedded in, yet have singularly failed to construct.

The current world university rankings juggernaut is blinding us, yet innovative new assessment schemes — schemes that take into account the diversity of institutional geographies, profiles, missions, and stakeholders — could be fashioned if we take pause. It is time to make more proactive decisions about just what types of values and practices should be underlying comparative institutional assessments within the emerging global higher education landscape.

Kris Olds, Ellen Hazelkorn & Susan Robertson

Rankings: a case of blurry pictures of the academic landscape?

Editors’ note: this guest entry has been kindly contributed by Pablo Achard (University of Geneva). After a PhD in particle physics at CERN and the University of Geneva (Switzerland), Pablo Achard moved to the universities of Marseilles (France), then Antwerp (Belgium) and Brandeis (MA), to pursue research in computational neurosciences. He currently works at the University of Geneva where he supports the Rectorate on bibliometrics and strategic planning issues. Our thanks to Dr. Achard for this ‘insider’s’ take on the challenges of making sense of world university rankings.

Kris Olds & Susan Robertson

~~~~~~~~~~~~~~

While national rankings of universities can be traced back to the 19th century, international rankings appeared at the beginning of the 21st century [1]. Shanghai Jiao Tong University’s and Times Higher Education’s (THE) rankings were among the pioneers and remain among the most visible ones. But you might have heard of similar league tables designed by the CSIC, the University of Leiden, the HEEACT, QS, the University of Western Australia, RatER, Mines ParisTech, etc. Such a proliferation certainly responds to a high demand. But what are these rankings worth? I argue here that rankings are blurry pictures of the academic landscape. As such, they are much better than complete blindness but should be used with great care.

Blurry pictures

The image of the academic landscape captured by the rankings is always a bit out of focus. This is improving with time, and we should acknowledge the rankers who make considerable efforts to improve the sharpness. Nonetheless, a perfectly sharp image remains an unattainable ideal.

First of all, it is very difficult to get clean and comparable data on such a large scale. Reality is always grey; the act of counting is black or white. Take such a central element as a “researcher”. What should you count? Heads or full-time equivalents? Full-time equivalents based on their contracts or on the effective time spent at the university? Do you include PhD “students”? Visiting scholars? Professors on sabbatical? Research engineers? Retired professors who still run a lab? Deans who don’t? What do you do with researchers affiliated with non-university research organizations that are still loosely connected to a university (think of Germany or France here)? And how do you collect the data?

This difficulty in obtaining clean and comparable data is the main reason for the lack of any good indicator of teaching quality. To do it properly, one would need to evaluate students’ level of knowledge upon graduation, and possibly compare it with their level when they entered the university. To this end, the OECD is launching a project called AHELO, but it is still in its pilot phase. In the meantime, some rankers use poor proxies (like the percentage of international students) while others focus their attention on research outcomes only.

Second, some indicators are very sensitive to “noise” due to small statistics. This is the case for the number of Nobel prizes used in the Shanghai ranking. No doubt having 20 of them on your faculty says something about its quality. But having one, obtained years ago, for work partly or fully done elsewhere? Because of the long-tailed distribution of university ranking scores, such a single event won’t push a university ranked 100 into the top 10, but it can lift a university ranked 500 by more than a hundred places.
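To make the long-tail effect concrete, here is a minimal illustrative simulation (this is not Shanghai’s actual formula; the score curve and the two-point “Nobel bonus” are invented purely for illustration). The same small bonus barely moves a university near rank 100, but lifts one near rank 500 by hundreds of places:

```python
# Hypothetical long-tailed score curve: steep at the top, nearly flat in the tail.
def score(rank: int) -> float:
    return 100.0 * rank ** -0.5

scores = [score(r) for r in range(1, 1001)]  # 1,000 ranked universities

def rank_after_bonus(old_rank: int, bonus: float) -> int:
    """Rank of the university originally at `old_rank` after adding `bonus` points."""
    boosted = scores[old_rank - 1] + bonus
    others = scores[:old_rank - 1] + scores[old_rank:]
    return 1 + sum(s > boosted for s in others)

for r in (100, 500):
    print(r, "->", rank_after_bonus(r, bonus=2.0))
# 100 -> 70   (a gain of ~30 places)
# 500 -> 239  (a gain of ~260 places)
```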

This dynamic seemed to occur in the most recent THE ranking. In their new methodology, the “citation impact” of a university counts for one third of the final score. Not many details were given on how this impact is calculated. But the description on THE’s website, and the way this impact is calculated by Thomson Reuters – which provides the data to THE – in its commercial product InCites, make me believe that they used the so-called “Leiden crown indicator”. This indicator is a welcome improvement over the raw ratio of citations per publication, since it takes into account the citation behaviours of the different disciplines. But it suffers from instability if you look at a small set of publications or at publications in fields where you don’t expect many citations [2]: the denominator can become very small, leading to sky-high ratios. This is likely what happened with Alexandria University. According to this indicator, Alexandria ranks 4th in the world, surpassed only by Caltech, MIT and Princeton. This is an unexpected result for anyone who knows the world research landscape [3].
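The precise formula THE and Thomson Reuters used has not been published, so the sketch below only illustrates the general mechanism behind field-normalized citation indicators of this kind; the paper counts and field-expected citation values are invented. It shows how a tiny publication set in a low-citation field can generate an enormous normalized score:

```python
def crown_ratio_of_sums(citations, expected):
    """'Crown'-style indicator: total citations / total field-expected citations."""
    return sum(citations) / sum(expected)

def mean_normalized_citations(citations, expected):
    """MNCS-style indicator: average of the per-paper citation ratios."""
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

# A large portfolio performing exactly at the world average of its fields.
big_c, big_e = [10] * 1000, [10.0] * 1000

# A tiny portfolio in a field where papers are expected to gather ~0.2 citations:
# a handful of citations on one paper inflates the normalized score enormously.
small_c, small_e = [5, 1, 0], [0.2, 0.2, 0.2]

print(crown_ratio_of_sums(big_c, big_e))            # 1.0  -> exactly world average
print(crown_ratio_of_sums(small_c, small_e))        # 10.0 -> "ten times world average"
print(mean_normalized_citations(small_c, small_e))  # 10.0 -> same instability
```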

Third, it is well documented that the act of measuring triggers the act of manipulating the measure. And this is made easy when the data are provided by the universities themselves, as for the THE or QS rankings. One can only be suspicious when reading the cases emphasized by Bookstein and colleagues: “For whatever reason, the quantity THES assigned to the University of Copenhagen staff-student ratio went from 51 (the sample median) in 2007 to 100 (a score attained by only 12 other schools in the top 200) […] Without this boost, Copenhagen’s […] ranking would have been 94 instead of 51. Another school with a 100 student-staff rating in 2009, Ecole Normale Supérieure, Paris, rose from the value of 68 just a year earlier, […] thus earning a ranking of 28 instead of 48.”

Pictures of a landscape are taken from a given point of view

But let’s suppose that the rankers can improve their indicators to obtain perfectly focused images. Let’s imagine that we have clean, robust and hard-to-manipulate data to rely on. Would the rankings then give a neutral picture of the academic landscape? Certainly not. There is no such thing as “neutrality” in any social construct.

Some rankings are built with a precise output in mind. The most laughable example of this was Mines ParisTech’s ranking, which placed itself and four other French “grandes écoles” in the top 20. This is probably the worst flaw of any ranking. But other types of biases are always present, even if less visible.

Most rankings are built with a precise question in mind. Let’s look at the evaluation of the impact of research. Are you interested in finding the key players, in which case the volume of citations is one way to go? Or are you interested in finding the most efficient institutions, in which case you would normalize the citations to some input (number of articles, number of researchers, or budget)? Different questions need different indicators, hence different rankings. This is the approach followed by Leiden, which publishes several rankings at a time. However, this is not the sexiest or most media-friendly approach.
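As a toy illustration (the institutions and figures below are invented), the same three universities order differently depending on whether the question is “who are the key players?” (total citations, size-dependent) or “who is most efficient?” (citations per paper, size-independent):

```python
institutions = {
    "Big U":    {"papers": 20000, "citations": 300000},  # 15 citations per paper
    "Medium U": {"papers": 5000,  "citations": 90000},   # 18 citations per paper
    "Small U":  {"papers": 800,   "citations": 20000},   # 25 citations per paper
}

# Size-dependent question: who produces the largest volume of cited work?
by_volume = sorted(institutions, key=lambda u: -institutions[u]["citations"])

# Size-independent question: who is most efficient per paper?
by_efficiency = sorted(
    institutions,
    key=lambda u: -institutions[u]["citations"] / institutions[u]["papers"],
)

print(by_volume)      # ['Big U', 'Medium U', 'Small U']
print(by_efficiency)  # ['Small U', 'Medium U', 'Big U']
```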

Finally, all rankings are built with a model in mind of what a good university is. “The basic problem is that there is no definition of the ideal university”, a point made forcefully today by University College London’s Vice-Chancellor. Often, the Harvard model is the implicit model. In this case, getting Harvard on top is a way to check for “mistakes” in the design of the methodology. But the missions of the university are many. One usually talks about the production (research) and the dissemination (teaching) of knowledge, together with a “third mission” towards society that can in turn have many different meanings, from the creation of spin-offs to the reduction of social inequities. For these different missions, different indicators are needed. The salary of fresh graduates is probably a good indicator for judging MBA programmes, and certainly a bad one for liberal arts colleges.

To pursue the photography metaphor, every single snapshot is taken from a given point of view and with a given aim. Points of view and aims can be visible, as is the case in artistic photography. They can also pretend to neutrality, as in photojournalism. But this neutrality is wishful thinking. The same applies to rankings.

Useful pictures

Rankings are nevertheless useful pictures. Insiders who have a comprehensive knowledge of the global academic landscape understandably laugh at rankings’ flaws. However, the increase in the number of rankings and in their use tells us that they fill a need. Rankings can be viewed as the dragon of New Public Management and accountability assaulting the ivory tower of disinterested knowledge. They certainly contribute to a global shift in the contract between society and universities. But I can hardly believe that the Times would spend thousands if not millions for such a purpose.

What then is the social use of rankings? I think they are the most accessible vision of the academic landscape for millions of “outsiders”. The CSIC ranks around 20,000 (yes, twenty thousand!) higher education institutions. Who can expect everyone to be aware of their qualities? Think of young students, employers, politicians or academics from not-so-well connected universities. Is everyone in the Midwest able to evaluate the quality of research at a school strangely named Eidgenössische Technische Hochschule Zürich?

Even to insiders, rankings tell us something. Thanks to improvements in the pictures’ quality and to the multiplication of points of view, rankings form an image that is not uninteresting. If a university is regularly in the top 20, this is significant: you can expect to find there one of the best research and teaching environments. If it is regularly in the top 300, this is also significant: you can expect to find one of the few universities where the “global brain market” takes place. If a country – like China – increases its share of good universities over time, this too is significant: it suggests that a long-term ‘improvement’ (at least in the direction of what is being ranked as important) of its higher education system is under way.

Of course, any important decision concerning where to study, where to work or which project to embark on must be based on more criteria than rankings. Just as one would never go mountain climbing based solely on blurry snapshots of the range, one should not use rankings as one’s sole source of information about universities.

Pablo Achard


Notes

[1] See Ben Wildavsky, The Great Brain Race: How Global Universities Are Reshaping the World, Princeton University Press, 2010; and more specifically its chapter 4, “College rankings go global”.

[2] The Leiden researchers have recently decided to adopt a more robust indicator for their studies (http://arxiv.org/abs/1003.2167). But whatever indicator is used, the problem will remain for small statistical samples.

[3] See recent discussions on the University Ranking Watch blog for more details on this issue.



A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge over Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the 1/2 placing). For non-rankings watchers, Harvard had been pegged as No 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, as well as the reduction in other US universities’ shares of “international faculty,” was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front-end (post-ranking) acquisition of the data — concentrates the majority of costs on the universities and the majority of benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist, for once the pipelines are laid, the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end it is a cost primarily borne by the taxpayer (for most universities are public). It is the taxpayer who pays the majority of the administrators’ salaries, enabling them to compile the data and submit it to the rankers.

A second cost, though indirect and obscured, relates to the use of rankings data by credit rating agencies like Moody’s or Standard & Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘“Passing judgment”: the role of credit rating agencies in the global governance of UK universities’). Given that the cost of borrowing for universities is determined by their credit-worthiness, and that rankings are used in this process, we can conclude that any increase in the cost of borrowing is also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to use the base rankings data freely for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled.  Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, all of the base data, and not just the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September) or the QS World University Rankings Results 2010, should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, as in exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that society should have free, open and timely access to the outcomes of taxpayer-funded research. Surely this principle applies just as well to university rankings data!

Another reason society deserves free, open and timely access to the data is that a change in practices would shed light on how the organizations ranking universities implement their methodologies, methodologies that are ever changing (and hence ever more open to error).

Finer-grained access to the data would enable us to check exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, we cross-check the data when outcomes are this close, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between releases of the hyper-competitive QS vs THE world university rankings.

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it would admittedly rebalance the current arrangement, in which the majority of the costs fall on the universities and the majority of the benefits accrue to the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

Bibliometrics, global rankings, and transparency

Why do we care so much about the actual and potential uses of bibliometrics (“the generic term for data about publications,” according to the OECD), and world university ranking methodologies, but care so little about the private sector firms, and their inter-firm relations, that drive the bibliometrics/global rankings agenda forward?

This question came to mind when I was reading the 17 June 2010 issue of Nature magazine, which includes a detailed assessment of various aspects of bibliometrics, including the value of “science metrics” to assess aspects of the impact of research output (e.g., publications) as well as “individual scientific achievement”.

The Nature special issue, especially Richard Van Noorden’s survey of the “rapidly evolving ecosystem” of [biblio]metrics, is well worth a read. Even though bibliometrics can be a problematic and fraught dimension of academic life, they are rapidly becoming an accepted element of the governance (broadly defined) of higher education and research. Bibliometrics are having a diverse and increasingly deep impact on governance processes at a range of scales, from the individual (a key focus of the Nature special issue) through to the unit/department, the university, the discipline/field, the national, the regional, and the global.

Now while this “ecosystem” is developing and changing rapidly, and a plethora of innovations are occurring regarding how different disciplines/fields should or should not utilize bibliometrics to better understand the nature and impact of knowledge production and dissemination, it is interesting to stand back and think about the non-state actors producing, for profit, this form of technology that meshes remarkably well with our contemporary audit culture.

In today’s entry, I’ve got two main points to make, before concluding with some questions to consider.

First, it seems to me that there is a disproportionate amount of research being conducted on the uses and abuses of metrics in contrast to research on who the producers of these metrics are, how these firms and their inter-firm relations operate, and how they attempt to influence the nature of academic practice around the world.

Now, I am not seeking to imply that firms such as Elsevier (producer of Scopus), Thomson Reuters (producer of the ISI Web of Knowledge), and Google (producer of Google Scholar) are necessarily generating negative impacts (see, for example, ‘Regional content expansion in Web of Science®: opening borders to exploration’, a good news story from Thomson Reuters that we happily sought out), but I want to make the point that there is a glaring disjuncture between the volume of research conducted on bibliometrics and research on these firms (the bibliometricians), and on how these technologies are brought to life and to market. For example, a search of Thomson Reuters’ ISI Web of Knowledge for terms like Scopus, Thomson Reuters, Web of Science and bibliometrics generates a nearly endless list of articles comparing the main databases, the innovations associated with them, and so on, but amazingly little research on Elsevier or Thomson Reuters (i.e. the firms). From thick to thin, indeed, and somewhat analogous to the lack of substantial research available on ratings agencies such as Moody’s or Standard & Poor’s.

Second, and on a related note, the role of firms such as Elsevier and Thomson Reuters, not to mention QS Quacquarelli Symonds Ltd and TSL Education Ltd, in fueling the global rankings phenomenon has received remarkably little attention in contrast to the vigorous debates about methodologies. For example, the four main global ranking schemes, past and present, all draw from the databases provided by Thomson Reuters and Elsevier.

One of the interesting aspects of these firms’ involvement with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly no stated) logic for such a frequency. Why not every 3-4 years, for example, perhaps in alignment with the World Cup or the Olympics? I can understand why rankings have to happen more frequently than the US’s long-delayed National Research Council (NRC) scheme, and they certainly need to happen more frequently than the years France wins the World Cup title (sorry…), but why rank every single year?

But let’s think about this issue with the firms in mind, rather than the pros and cons of the methodologies.

From a firm perspective, the annual cycle arguably needs to become normalized because it is a mechanism to extract freely provided data from universities. This data is clearly used to rank, but it also feeds into the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission, which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

QS Quacquarelli Symonds Ltd, for example, was marketing such services at its stand at the recent NAFSA conference in Kansas City, while Thomson Reuters has been busy developing what it deems the Global Institutional Profiles Project. This latter project is being spearheaded by Jonathon Adams, a former Leeds University staff member who established a private firm (Evidence Ltd) in the early 1990s that rode the UK’s Research Assessment Exercise (RAE) and European Research Area (ERA) waves before being acquired by Thomson Reuters in January 2009.

Sophisticated online data entry portals are also being created. These portals build a free-flowing (at least one-way) pipeline between the administrative offices of hundreds of universities around the world and the firms doing the ranking.

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist, for once the pipelines are laid, the complexity of data requests can be gradually ramped up.

A key objective, then, seems to involve using annual global rankings to update fee-generating databases, not to mention boost intra-firm knowledge bases and capabilities (for consultancies), all operational at the global scale.

In closing, is the posited disjuncture between research on bibliometrics and research on the bibliometricians, and on the information service firms in which these units are embedded, worth noting and doing something about?

Second, what is the rationale for annual rankings versus a more measured rankings window, in a temporal sense? Indeed, why not synchronize all global rankings to specific years (e.g., 2010, 2014, 2018) so as to reduce the strain on universities vis-à-vis the provision of data, and enable timely comparisons between competing schemes? A more measured pace would arguably reflect the actual pace of change within our higher education institutions rather than the needs of these private firms.

And third, are firms like Thomson Reuters and Elsevier, as well as their partners (esp., QS Quacquarelli Symonds Ltd and TSL Education Ltd), being as transparent as they should be about the nature of their operations? Perhaps it would be useful to have accessible disclosures/discussions about:

  • What happens with all of the data that universities freely provide?
  • What is stipulated in the contracts between teams of rankers (e.g., Times Higher Education and Thomson Reuters)?
  • What rights do universities have regarding the open examination and use of all of the data and associated analyses created on the basis of the data universities originally provided?
  • Who should be governing, or at least observing, the relationship between these firms and the world’s universities? Is this relationship best continued on a bilateral firm to university basis? Or is the current approach inadequate? If it is perceived to be inadequate, should other types of actors be brought into the picture at the national scale (e.g., the US Department of Education or national associations of universities), the regional-scale (e.g., the European University Association), and/or the global scale (e.g., the International Association of Universities)?

In short, is it not time that the transparency agenda the world’s universities are being subjected to also be applied to the private sector firms that are driving the bibliometrics/global rankings agenda forward?

Kris Olds