Measuring Academic Research in Canada: Field-Normalized Academic Rankings 2012

Greetings from Chicago, where I’m about to start a meeting at the Federal Reserve Bank of Chicago on Mobilizing Higher Education to Support Regional Innovation and A Knowledge-Driven Economy. The main objective of the meeting is to explore a possible higher education-focused follow-up to the OECD’s Territorial Review: the Chicago Tri-State Metro Area. I’ll develop an entry about this fascinating topic in the near future.

Before I head out to get my wake-up coffee, though, I wanted to alert you to a ‘hot-off-the-press’ report by Toronto-based Higher Education Strategy Associates (HESA). The report can be downloaded here in PDF format, and I’ve pasted in the quasi-press release below, which just arrived in my email inbox.

More fodder for the rankings debate, and sure to interest Canadian higher ed & research people, not to mention their international partners (current & prospective).

Kris Olds

>>>>>>

Research Rankings

August 28, 2012
Alex Usher

Today, we at HESA are releasing our brand new Canadian Research Rankings. We’re pretty proud of what we’ve accomplished here, so let me tell you a bit about them.

Unlike previous Canadian research rankings conducted by Research InfoSource, these aren’t simply about raw money and publication totals. As we’ve already seen, those measures tend to privilege strength in some disciplines (the high-citation, high-cost ones) more than others. Institutions which are good in low-citation, low-cost disciplines simply never get recognized in these schemes.

Our rankings get around this problem by field-normalizing all results by discipline. We measure institutions’ current research strength through granting council award data, and we measure the depth of their academic capital (“deposits of erudition,” if you will) through use of the H-index (which, if you’ll recall, we used back in the spring to look at top academic disciplines). In both cases, we determine the national average of grants and H-indexes in every discipline, and then adjust each individual researcher’s and department’s scores to be a function of that average.

(Well, not quite all disciplines. We don’t do medicine because it’s sometimes awfully hard to tell who is staff and who is not, given the blurry lines between universities and hospitals.)
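
To make the field-normalization idea concrete, here is a minimal sketch in Python. It is illustrative only: the researcher records, the H-index values, and the simple averaging into an institutional score are assumptions for the example, not HESA’s actual data or weighting.

```python
# A minimal, hypothetical sketch of field normalization: every record and
# H-index value below is invented for illustration, not HESA data.
from collections import defaultdict
from statistics import mean

# Hypothetical researcher records: (institution, discipline, H-index).
researchers = [
    ("UBC", "Physics", 30),
    ("UBC", "Physics", 10),
    ("UBC", "Philosophy", 8),
    ("Rimouski", "Physics", 25),
    ("Rimouski", "Philosophy", 12),
]

# 1. National average H-index per discipline.
by_discipline = defaultdict(list)
for _, discipline, h in researchers:
    by_discipline[discipline].append(h)
discipline_avg = {d: mean(hs) for d, hs in by_discipline.items()}

# 2. Express each researcher's score as a ratio to their discipline's average,
#    so a philosopher is judged against philosophers, not physicists.
normalized = defaultdict(list)
for inst, discipline, h in researchers:
    normalized[inst].append(h / discipline_avg[discipline])

# 3. A simple institutional score: the mean of its researchers' normalized scores.
for inst, scores in normalized.items():
    print(inst, round(mean(scores), 2))  # UBC 0.88, Rimouski 1.18
```

In this toy data the smaller institution comes out ahead because its average normalized productivity per researcher is higher, which mirrors the point about size made further down.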

Our methods help to correct some of the field biases of normal research rankings. But to make things even less biased, we separate out performance in SSHRC-funded disciplines and NSERC-funded disciplines, so as to better examine strengths and weaknesses in each of these areas. But, it turns out, strength in one is substantially correlated with strength in the other. In fact, the top university in both areas is the same: the University of British Columbia (a round of applause, if you please).

I hope you’ll read the full report, but just to give you a taste, here’s our top ten for SSHRC and NSERC disciplines.

Eyebrows furrowed because of Rimouski? Get over your preconceptions that research strength is a function of size. Though that’s usually the case, small institutions with high average faculty productivity can occasionally look pretty good as well.

More tomorrow.

Governing world university rankers: an agenda for much needed reform

Is it now time to ensure that world university rankers are overseen, if not governed, so as to achieve better quality assessments of the differential contributions of universities in the global higher education and research landscape?

In this brief entry we make a case that something needs to be done about the system in which world university rankers operate. We have two brief points to make about why something needs to be done, and then we outline some options for moving beyond today’s status quo situation.

First, while universities and rankers are both interested in how well universities are positioned in the emerging global higher education landscape, power over the process, as currently exercised, rests solely with the rankers. Clearly firms like QS and Times Higher Education are open to input, advice, and indeed critique, but in the end they, along with information services firms like Thomson Reuters, decide:

  • How the methodology is configured
  • How the methodology is implemented and vetted
  • When and how the rankings outcomes are released
  • Who is permitted access to the base data
  • When and how errors are corrected in rankings-related publications
  • What lessons are learned from errors
  • How the data is subsequently used

Rankers have authored the process, and universities (not to mention associations of universities, and ministries of education) have simply handed over the raw data. Observers of this process might be forgiven for thinking that universities have acquiesced to the rankers’ desires with remarkably little thought. How and why we’ve ended up in such a state of affairs is a fascinating (if not alarming) indicator of how fearful many universities are of being erased from increasingly mediatized viewpoints, and how slow universities and governments have been in adjusting to the globalization of higher education and research, including the desectoralization process. This situation has some parallels with the ways that ratings agencies (e.g., Standard and Poor’s or Moody’s) have been able to operate over the last several decades.

Second, and as has been noted in two of our recent entries:

the costs associated with providing rankers (especially QS and THE/Thomson Reuters) with data are increasingly concentrated on universities.

On a related note, there is no rationale for the now annual rankings cycle that the rankers have successfully been able to normalize. What really changes on a year-to-year basis, apart from changes in ranking methodologies? Or, to paraphrase Macquarie University’s vice-chancellor, Steven Schwartz, in this Monday’s Sydney Morning Herald:

“I’ve never quite adjusted myself to the idea that universities can jump around from year to year like bungy jumpers,” he says.

“They’re like huge oil tankers; they take forever to turn around. Anybody who works in a university realises how little they change from year to year.”

Indeed if the rationale for an annual cycle of rankings were so obvious, government ministries would surely facilitate more annual assessment exercises. Even the most managerial and bibliometric-predisposed of governments anywhere – in the UK – has spaced its intense research assessment exercise out over a 4-6 year cycle. And yet the rankers have universities on the run. Why? Because this cycle facilitates data provision for commercial databases, and it enables increasingly competitive rankers to construct their own lucrative markets. This, perhaps, explains the 6 July 2010 reaction from QS to a call in GlobalHigherEd for a four- versus one-year rankings cycle:

Thus we have a situation where rankers seeking to construct media/information service markets are driving up data provision time and costs for universities, facilitating continual change in methodologies, and as a matter of consequence generating some surreal swings in ranked positions. Signs abound that rankers are driving too hard, taking too many risks, while failing to respect universities, especially those outside of the upper echelon of the rank orders.

Assuming you agree that something should happen, the options for action are many. Given what we know about the rankers, and the universities that are ranked, we have developed four options, in no order of priority, to further discussion on this topic. Clearly there are other options, and we welcome alternative suggestions, as well as critiques of our ideas below.

The first option for action is the creation of an ad-hoc task force by 2-3 associations of universities located within several world regions, the International Association of Universities (IAU), and one or more international consortia of universities. Such an initiative could build on the work of the European University Association (EUA), which created a regionally-specific task force in early 2010. Following an agreement to halt world university rankings for two years (2011 & 2012), this new ad-hoc task force could commission a series of studies regarding the world university rankings phenomenon, not to mention the development of alternative options for assessing, benchmarking and comparing higher education performance and quality. In the end the current status quo regarding world university rankings could be sanctioned, but such an approach could just as easily lead to new approaches, new analytical instruments, and new concepts that might better shed light on the diverse impacts of contemporary universities.

A second option is an inter-governmental agreement about the conditions in which world university rankings can occur. This agreement could be forged in the context of bi-lateral relations between ministers in select countries: a US-UK agreement, for example, would ensure that the rankers reform their practices. A variation on this theme is an agreement of ministers of education (or their equivalent) in the context of the annual G8 University Summit (to be held in 2011), or the next Global Bologna Policy Forum (to be held in 2012) that will bring together 68+ ministers of education.

The third option for action is non-engagement, as in an organized boycott. This option would have to be pushed by one or more key associations of universities. The outcome of this strategy, assuming it is effective, is the shutdown of unique data-intensive ranking schemes like the QS and THE world university rankings for the foreseeable future. Numerous other schemes (e.g., the new High Impact Universities) would carry on, of course, for they use more easily available or generated forms of data.

A fourth option is the establishment of an organization that has the autonomy, and the resources, to oversee rankings initiatives, especially those that depend upon university-provided data. No such organization currently exists: the only one that even comes close to what we are calling for (the IREG Observatory on Academic Ranking and Excellence) suffers from the inclusion of too many rankers on its executive committee (a recipe for serious conflicts of interest), and from its reliance on member fees for a significant portion of its budget (ditto).

In closing, the acrimonious split between QS and Times Higher Education, and the formal inclusion of Thomson Reuters into the world university ranking world, has elevated this phenomenon to a new ‘higher-stakes’ level. Given these developments, given the expenses associated with providing the data, given some of the glaring errors or biases associated with the 2010 rankings, and given the problems associated with using university-scaled quantitative measures to assess ‘quality’ in a relative sense, we think it is high time for some new forms of action. And by action we don’t mean more griping about methodology, but attention to the ranking system that universities are embedded in, yet have singularly failed to construct.

The current world university rankings juggernaut is blinding us, yet innovative new assessment schemes — schemes that take into account the diversity of institutional geographies, profiles, missions, and stakeholders — could be fashioned if we take pause. It is time to make more proactive decisions about just what types of values and practices should be underlying comparative institutional assessments within the emerging global higher education landscape.

Kris Olds, Ellen Hazelkorn & Susan Robertson

The 2010 THE World University Rankings, powered by Thomson Reuters

The new 2010 Times Higher Education (THE) World University Rankings issue has just been released and we will see, no doubt, plenty of discussions and debate about the outcome. Like them or not, rankings are here to stay and the battle is now on to shape their methodologies, their frequency, the level of detail they freely provide to ranked universities and the public, their oversight (and perhaps governance?), their conceptualization, and so on.

Leaving aside the ranking outcome (the top 30, from a screen grab of the top 200, is pasted in below), it is worth noting that this new rankings scheme has been produced with the analytic insights, power, and savvy of Thomson Reuters, a company with 2009 revenue of US $12.9 billion and “over 55,000 employees in more than 100 countries”.

As discussed on GlobalHigherEd before:

Thomson Reuters is a private global information services firm, and a highly respected one at that. Apart from ‘deep pockets’, they have knowledgeable staff, and a not insignificant number of them. For example, on 14 September Phil Baty, of Times Higher Education, sent out this fact via their Twitter feed:

2 days to #THEWUR. Fact: Thomson Reuters involved more than 100 staff members in its global profiles project, which fuels the rankings

The incorporation of Thomson Reuters into the rankings game by Times Higher Education was a strategically smart move for this media company, for it arguably (a) enhances their capacity (in principle) to improve ranking methodology and implementation, and (b) improves the respect the ranking exercise is likely to get in many quarters. Thomson Reuters is, thus, an analytical-cum-legitimacy vehicle of sorts.

What does this mean regarding the 2010 THE World University Rankings outcome?  Well, regardless of your views on the uses and abuses of rankings, this Thomson Reuters-backed outcome will generate more versus less attention from the media, ministries of education, and universities themselves.  And if the outcome generates any surprises, it will make it a harder job for some university leaders to provide an explanation as to why their universities have fallen down the rankings ladder.  In other words, the data will be perceived to be more reliable, and the methodology more rigorously framed and implemented, even if methodological problems continue to exist.

Yet, this is a new partnership, and a new methodology, and it should therefore be counted as YEAR 1 of the THE World University Rankings.

As the logo above makes very clear, this is a powered (up) outcome, with power at play on more levels than one: welcome to a new ‘roll-out’ phase in the construction of what could be deemed a global ‘audit culture’.

Kris Olds

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon each university’s relative position in the RAE, which is the aggregate effect of the position of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g., see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a familiar (to US readers) format, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specifities of Humanities research, that can make it difficult to assess and compare with other sciences. Also,  it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national (at the intra-European scale) peaks and, presumably, valleys of quality output to be mapped for the humanities as a whole, but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a european model?’ which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems:
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities

This week, one of the two major credit rating agencies in the world, Standard & Poor’s (Moody’s is the other), issued their annual ‘Report Card’ on UK universities. This year’s version is titled UK Universities Enjoy Higher Revenues but Still Face Spending Pressures and it has received a fair bit of attention in media outlets (e.g., the Financial Times and The Guardian). Our thanks to Standard and Poor’s for sending us a copy of the report.

Five UK universities were in the spotlight after having their creditworthiness rated by Standard & Poor’s (S&P’s). In total, S&P’s assesses 20 universities in the UK (5 are made public, the rest are confidential), with 90% of those rated considered by the rating agency to be of high investment grade quality (A- or above).

Universities in the UK, it would appear from S&P’s Report Card, have had a relatively good year from ‘a credit perspective’. This pronouncement is surely something to celebrate in a year when the term ‘credit crunch’ has become the new metaphor for economic meltdown, and when higher education institutions are likely to be worried about the effects of the sub-prime mortgage lending crisis on loans to students and institutions more generally.

But to the average lay person (or even the average university professor), with a generally low level of financial literacy, what does this all mean? What does it mean to have global ratings agencies passing judgments on UK universities, on the policies that drive the sector more generally, or, finally, on individual institutional governance decisions?

Three years ago, when one of us (Susan) was delivering an Inaugural Professorial Address at Bristol, S&P’s 2005 report on Bristol (AA/Stable/–) was flashed up, much to the amusement of the audience though to the bemusement of the Chair, a senior university leader. The mild embarrassment of the Chair was largely a consequence of the fact that he was unaware of this judgment on Bristol by a credit rating agency headquartered in New York.

Now the reason for showing S&P’s judgment on the University of Bristol was neither to amuse the audience nor to embarrass the Chair. The point at the time was to sketch out the changing landscape of globalizing education systems within the wider global political economy, to introduce some of the newer (and more private) players who increasingly wield policymaking/shaping power on the sector, to reflect on how these agencies work, and to delineate some of the emerging effects of such developments on the sector.

Our view is that current analyses of globalizing higher education have neglected the role of credit rating agencies in the governance of the higher education sector—as specialized forms of intelligence gathering, shaping and judgment determination on universities. Yet, credit rating agencies are, in many ways, at the heart of contemporary global governance. Witness, for example, the huge debates going on now about establishing a European register for ratings agencies.

The release this week of S&P’s UK Universities 2008 Report Card is, then, an opportunity for GlobalHigherEd to sketch out for interested readers a basic understanding of global rating agencies and their relationship to the global governance of higher education.

Rating agencies – origins

Timothy Sinclair, a University of Warwick academic, has been writing for more than a decade on rating agencies and their roles in what he calls the New Global Finance (NGF) (Sinclair, 2000). His various articles and books (see, for example, Sinclair 1994; 2000; 2003; 2005)—some of which are listed below—are worth reading for those of you who want to pursue the topic in greater depth.

Sinclair outlines the early development and subsequent growing importance of credit rating agencies—the masters of capital and second superpowers—arguing that there have been a number of distinct phases in their development.

The first phase dates back to the 1850s, when compendiums of information were produced for American financial markets about large industrial infrastructure developments, such as railroads and canals. However, it was not until the 1907 financial crisis that these early compendiums of information were then used to make judgements about the creditworthiness of debtors (Sinclair, 2003: 148).

‘Rating’ then entered a period of rapid growth from the mid-1930s onwards, as a result of state governments in the US incorporating rating standards into their prudential rules for investment by pension funds.

A third phase began in the 1980s, when new financial innovations (particularly low-rated or junk bonds) were developed, and cheaper offshore non-national money markets were created (that is, places where funds are raised by selling debt obligations and equity outside of the current constraints of government regulation).

However this process, of what Sinclair (1994: 136) calls the ‘disintermediation’ of financing (meaning state regulatory bodies are side-stepped), creates information problems for those wishing to lend money and those wishing to borrow it.

The current phase is now characterized by, on the one hand, greater internationalization of finance, and, on the other, the increased significance of capital markets that challenge the role of banks as intermediaries.

Credit rating agencies have, as a result, become more important as suppliers of the information with which to make credit-worthiness judgments.

New York-based rating agencies have grown rapidly since then, responding to innovations in financial instruments, on the one hand, and the need for information, on the other. Demand for information has also generated competition within the industry, with some firms operating niche specializations – as we see, for instance, with Standard & Poor’s (itself a subsidiary of publisher McGraw-Hill) and the higher education sector.

Credit rating is big, big business. As Sinclair (2005) notes, the two major credit rating agencies, Moody’s and Standard & Poor’s, pass judgments on around $30 trillion worth of securities each year. Ratings also affect rates or costs of borrowing, so that the higher the rating, the lower the risk of default on repayment to the lender and therefore the lower the cost to the borrower.

Universities with different credit ratings will, therefore, be differently placed to borrow – so that the adage of ‘the more you have the more you get’ becomes a major theme.

The rating process

If we look at the detail of the ‘issuer credit rating’ and ‘comments’ in the Report Card of, for instance, the University of Bristol, or King’s College London, we can see that detail is gathered on the financial rating of the issuer; on the industry, competitors, and economy; on legal advice related to the specific issue; on management, policy, business outlook, accounting practices and so on; and on the competitive position, quality of management, long term industry prospects, and wider economic environment. As Sinclair (2003: 150) notes:

The rating agencies are most interested in data on cash flow relative to debt service obligations. They want to know how liquid the company is, and where there will be timely problems likely to hinder repayment. Other information may include five-year financial projections, including income statements and balance sheets, analysis of capital spending plans, financing alternatives, and contingency plans. This information which may not be publicly known is supplemented by agency research into the value of current outstanding obligations, stock valuations and other publicly available data that allows for an inference…

The rating that follows – an opinion on creditworthiness – is generated by an analytical team; a report is prepared with the rating and rationale; this is put to a rating committee made up of senior officials; and a final determination is made in private. The decision is subject to appeal by the issuer. Issuer credit ratings can be either long or short term. S&P uses the following nomenclature for long-term issue credit ratings (see Bankers Almanac, 2008: 1-3):

  • AAA – highest rating; extremely strong capacity to meet financial commitments
  • AA – very strong capacity to meet financial commitments
  • A – strong capacity to meet financial commitments, but susceptible to the adverse effects of changes in circumstances and economic conditions
  • BBB – adequate capacity to meet financial commitments
  • BB – less vulnerable in the near term than other lower-rated obligors, but faces major ongoing uncertainties
  • B – more vulnerable than BB, but adverse business, financial or economic conditions will likely impair the obligor’s capacity to meet its financial commitments
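
To illustrate the mechanics sketched above (ratings as an ordinal scale that feeds through to borrowing costs), here is a small, purely illustrative Python sketch. The rating order follows the nomenclature listed above; the spread figures are invented for the example and are not S&P data.

```python
# Purely illustrative: the long-term scale above treated as an ordinal ranking,
# paired with invented borrowing spreads (basis points over a benchmark) to
# show the "higher rating, cheaper debt" relationship discussed earlier.
RATING_ORDER = ["AAA", "AA", "A", "BBB", "BB", "B"]  # strongest to weakest

HYPOTHETICAL_SPREAD_BPS = {
    "AAA": 40, "AA": 60, "A": 90, "BBB": 150, "BB": 300, "B": 500,
}

def cheaper_borrower(rating_a: str, rating_b: str) -> str:
    """Return the rating that implies cheaper borrowing, all else being equal."""
    return min(rating_a, rating_b, key=RATING_ORDER.index)

# e.g., an AA-rated university versus a BBB-rated one:
best = cheaper_borrower("AA", "BBB")
print(best, HYPOTHETICAL_SPREAD_BPS[best], "bps")  # AA 60 bps
```

In this toy pricing, a slide from AA to BBB more than doubles the spread, which is the sense in which a downgrade raises an institution’s cost of borrowing.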

Rating higher education institutions

In light of the above discussion, we can now look more closely at the kinds of judgments passed on those universities included in a typical Report Card on the sector by Standard & Poor’s (see 2008: 7).

The 2008 Report Card itself is short: a nine-page document which offers a ‘credit perspective’ on the sector more generally, and on 5 universities. We are told “the UK higher education sector has made positive strides over the past few years, but faces increasing risks in the medium-to-long term” (p. 2).

The Report goes on to note a trebling of tuition fees in the UK, the growth of the overseas student market and associated income, and an increase in research income for research-intensive universities – so that of the 5 universities rated, 1 has been upgraded, another has had its outlook revised to ‘positive’, and no ratings were adjusted for the other three.

The Report also notes (p. 2) that the universities publicly rated by S&P’s are among the leading universities in the UK. To support this claim they refer to another ranking mechanism that is now providing information in the global marketplace – The Times Higher QS World Universities Rankings 2007, which is, as we have noted in a recent entry (‘Euro angsts‘), receiving considerable critical attention in Europe.

However, the Report Card also notes pressures within the system: higher wage demands linked to tuition increases, the search for new researchers to be counted as part of the UK’s Research Assessment Exercise (RAE), global competition for international students, and the heightened expectations of students for better infrastructure as a result of higher fees.

Longer term risks include the fact that by 2020 there will be 16% fewer 18-year-olds coming through the system, according to forecasts by Universities UK – with the biggest impact being on the newer universities (in the UK these so-called ‘newer universities’ are former polytechnics that were granted university status in 1992).

Of the 20 UK universities rated in this S&P’s Report, 4 universities are rated AAA; 8 are rated AA; 6 are rated A, and 2 are rated BBB. The University of Bristol, as we can see from the analysts’ rating and comments which we have reproduced below, is given a relatively favorable rating. We have also quoted this rating at length to give you a sense of the kind of commentary made and how this relates to the judgment passed.


Credit rating agencies, as instruments of the global governance of higher education

Credit rating agencies are particularly powerful because both markets and governments see them as authoritative sources of judgment, with the result that they are major actors in controlling access to capital markets. And despite the evident importance of credit rating agencies in the governance of universities in the UK and elsewhere, there is a remarkable lack of attention to this phenomenon. We think there are important questions that need to be researched, with the results discussed more widely. For example:

  • How widely spread is the practice?
  • Why are some universities rated whilst others are not?
  • Why are some universities’ ratings considered confidential whilst others are not (keeping in mind that they are all, in the above UK case, public taxpayer supported universities)?
  • Have any universities contested their credit rating, and if so, through what process, and with what outcome?
  • How do universities’ management systems respond to these credit ratings, and in what ways might they influence ongoing policy decisions within the university and within the sector?
  • How robust are particular kinds of reputational or status ‘information’, such as World University Rankings, especially if we are looking at creditworthiness?

Our reports on these global rankings show that there are major problems with such measures. As we have profiled, and as have University Ranking Watch and the Beerkens’ Blog, there are clearly unresolved debates and major problems with global ranking schemes.

Clearly market liberalism, of the kind that has characterized this current period of globalization, requires new kinds of intermediaries to provide information for both buyer and seller. And it cannot hurt to have ‘outside’ assessments of the fiscal health of institutions (in this case universities) that are complex, often opaque, and taxpayer supported. However, to experts like Timothy Sinclair (2003), credit rating agencies privatize policymaking, and they can narrow the sphere of government intervention.

For EU Internal Market Commissioner Charlie McCreevy, credit ratings agencies like Moody’s and S&P’s contributed to the current financial market turmoil because they underestimated the risks related to their structured credit products. As the Commissioner commented in EurActiv in June: “No supervisor appears to have got as much as a sniff of the rot at the heart of the structured finance rating process before it all blew up.”

In other words, credit rating agencies lack political accountability and enjoy an ‘accountability gap’. And while efforts are now under way by regulators to close that gap by developing new regulatory frameworks and rules, analysts worry that these private actors will now find new ways around the rules, and in turn facilitate the creation of a riskier financial architecture (as happened with global mortgage markets).

As universities become more financialized, as well as ranked, indexed and barometered in the ways we have been mapping on GlobalHigherEd, such ‘information’ on the sector will also likely be deployed to pass judgment and generate ratings and rankings of ‘creditworthiness’ for universities. The net effect may well be to exaggerate the differences between institutions, to generate greater levels of uneven development within and across the sector, and to increase rather than decrease the opacity of the sector, thereby weakening its accountability.

In sum, there is little doubt credit rating agencies, in passing judgments, play a key and increasingly important role in the global governance of higher education. It is also clear from these developments that we need to pay much closer attention to what might be thought of as mundane entities – credit rating agencies – and their role in the global governance of higher education. And we are also hopeful that credit ratings agencies will outline their views on this important dimension of the small g governance of higher education institutions.

Selected References

Bankers Almanac (2008) Standard & Poor’s Definitions, last accessed 5 August 2008.

King, M. and Sinclair, T. (2003) Private actors and public policy: a requiem for the new Basel Capital Accord, International Political Science Review, 24 (3), pp. 345-62.

Sinclair, T. (1994) Passing judgement: credit rating processes as regulatory mechanisms of governance in the emerging world order, Review of International Political Economy, 1 (1), pp. 133-159.

Sinclair, T. (2000) Reinventing authority: embedded knowledge networks and the new global finance, Environment and Planning C: Government and Policy, August 18 (4), pp. 487-502.

Sinclair, T. (2003) Global monitor: bond rating agencies, New Political Economy, 8 (1), pp. 147-161.

Sinclair, T. (2005) The New Masters of Capital: American Bond Rating Agencies and the Politics of Creditworthiness, Ithaca, NY: Cornell University Press.

Standard & Poor’s (2008) Report Card: UK Universities Enjoy Higher Revenues But Still Face Spending Pressures, London: Standard & Poor’s.

Susan Robertson and Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the lists consists of 189 universities…

Quite the authority being vested in ranking schemes that are still being hotly debated!

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de-facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities, and The Times Higher-QS World University Rankings are generating both governance impacts, and substantial anxiety, in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high quality universities, despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment cum governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence of a serious European critique, or at least interrogation of, rankings and the rankers, until now. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking is a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes as they never realized, until more recently, how ranking schemes are deeply geopolitical and geoeconomic vehicles that enable the powerful to maintain their standing, and harness yet even more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in a table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende, for they merely reinforced their acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As with Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities in the way Saudi Arabia recently did with King Abdullah University of Science and Technology (KAUST) and its $10 billion endowment, angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be defined as a fait accompli phase, national and European political leaders came progressively to view the low status of European universities in the two key ranking schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when universities – key knowledge producers – are so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets, and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. That said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to mount sustained action is pretty quick in some sectors, and not in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PowerPoint presentations, newspaper and magazine stories, and so on. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics of which they are part and parcel.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and the associated effects. Questions remain, though, about how much light will be shed on the nature of university ranking schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe will converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged, for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schemas with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives linked to above, which are, to a degree, inspired by the German Excellence Initiative, itself partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? Because if Europe does not do it, no one else will – at least not in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University Academic Ranking of World Universities) only partially understands French. In some ways it would have been better to release the report simultaneously in French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, the French critique is well worth considering, and I can’t help but think that the EU, or one of the many emerging initiatives noted above, would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production

Numerous funding councils, academics, multilateral organizations, media outlets, and firms are exhibiting heightened interest in the evolution of the Chinese higher education system, including its role as a site and space of knowledge production. See these three recent contributions, for example:

It is thus noteworthy that the “Scientific business of Thomson Reuters” (as it is now known) has been seeking to position itself as a key analyst of the changing contribution of China-based scholars to the global research landscape. As anyone who has worked in Asia knows, the power of bibliometrics is immense, and quickly becoming more so, within the relevant governance systems that operate across the region. The strategists at Scientific clearly have their eye on the horizon, and are laying the foundations for a key presence in future deliberations about the production of knowledge in and on China (and the Asia-Pacific more generally).

Thomson and the gift economy

One of the mechanisms for establishing a presence and an effect is the production of knowledge about knowledge (in this case patents and ISI Web of Science citable articles), as well as gifts. On the gift economy front, yesterday marked the establishment of the first ‘Thomson Reuters Research Fronts Award 2008’, jointly sponsored by Thomson Reuters and the Chinese Academy of Sciences (CAS) “Research Front Analysis Center”, National Science Library. The awards ceremony was held in the sumptuous setting of the Hotel Nikko New Century Beijing.

As the Thomson Reuters press release notes:

This accolade is awarded to prominent scientific papers and their corresponding authors in recognition of their outstanding pioneering research and influential contribution to international research and development (R&D). The event was attended by over 150 of the winners’ industry peers from leading research institutions, universities and libraries.

The award is significant to China’s science community as it accords global recognition to their collaborative research work undertaken across all disciplines and institutions and highlights their contribution to groundbreaking research that has made China one of the world’s leading countries for the influence of its scientific papers. According to the citation analysis based on data from Scientific’s Web of Science, China is ranked second in the world by number of scientific papers published in 2007. [my emphasis]

Thomson incorporates ‘regional’ journals into the Web of Science

It was also interesting to receive news two days ago that the Scientific business of Thomson Reuters has just added “700 new regional journals” to the ISI Web of Science, journals that “typically target a regional rather than international audience by approaching subjects from a local perspective or focusing on particular topics of regional interest”. The breakdown of newly included journals is below, and was kindly sent to me by Thomson Reuters:

Scientific only admits journals that meet international-standard publishing practices and that include the necessary elements of English to enable the database development process, as noted here:

All journals added to the Web of Science go through a rigorous selection process. To meet stringent criteria for selection, regional journals must be published on time, have English-language bibliographic information (title, abstract, keywords), and cited references must be in the Roman alphabet.

In a general sense, this is a positive development, and one that many regionally focused scholars have long been calling for. There are inevitably issues to grapple with about just which ‘regional’ journals are included, about the implications for authors and publishers of providing English-language bibliographic information (not cheap on a mass basis), and about whether it really matters in the end to a globalizing higher education system that seems fixated on international refereed (IR) journal outlets. Still, this is progress of a notable type.
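For readers who like to see criteria operationalized, here is a minimal sketch, my own illustration rather than Thomson Reuters’ actual workflow, of how the stated selection rules could be applied to a journal record. The field names, the `is_roman` helper, and the sample record are all hypothetical.

```python
# A minimal, hypothetical sketch of applying the stated regional-journal criteria
# (timely publication, English/Roman-alphabet bibliographic data, Roman-alphabet
# cited references) to journal metadata. Field names are illustrative only.

def is_roman(text: str) -> bool:
    """Crude check: every alphabetic character falls in the basic Latin range."""
    return all(ord(ch) < 128 for ch in text if ch.isalpha())

def meets_selection_criteria(journal: dict) -> bool:
    bibliographic_ok = all(
        journal.get(field) and is_roman(journal[field])
        for field in ("title", "abstract", "keywords")
    )
    references_ok = all(is_roman(ref) for ref in journal.get("cited_references", []))
    return journal.get("published_on_time", False) and bibliographic_ok and references_ok

# Example usage with a made-up record:
journal = {
    "title": "Regional Studies in Hydrology",
    "abstract": "Flood modelling in the Mekong basin...",
    "keywords": "hydrology; Mekong; flood risk",
    "cited_references": ["Smith J (2006) J. Hydrol. 320: 1-12"],
    "published_on_time": True,
}
print(meets_selection_criteria(journal))  # True
```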

Intellectual Property (IP) generation (2003-2007)

The horizon scanning that Thomson Reuters is engaged in generates relevant information for many audiences. For example, see the two graphics below, which track 2003-2007 patent production rates and levels within select “priority countries”. The graphics are available in World IP Today by Thomson Reuters (2008). Click on them to view them at a legible size.

Noteworthy is the fact that:

China has almost doubled its volume of patents from 2003-2007 and will become a strong rival to Japan and the United States in years to come. Academia represents a key source of innovation in many countries. China has the largest proportion of academic innovation. This is strong evidence of the Chinese Government’s drive to strengthen its academic institutions

Thus we see China as a rapidly increasing producer of IP (in the form of patents), though in a system that is relatively more dependent upon its universities as a base for the production process. To be sure, private and state-owned enterprises will become more significant over time in China (and Russia), but the relative importance of universities (versus firms or research-only agencies) in the knowledge production landscape is worth noting.

Through the production of such knowledge, technologies, and events, the Scientific business of Thomson Reuters seeks to function as the key global broker of knowledge about knowledge. Yet the role of this institution in providing and reshaping the architecture that shapes ever more scholars’ careers, and ever more higher education systems, is remarkably under-examined.

Kris Olds

ps: alas, GlobalHigherEd is still being censored in China, as we use the WordPress.com blogging platform and the Chinese government is blanket-censoring all WordPress.com blogs. So much for knowledge sharing!

Australia, be careful what you wish for

Editor’s note: this is the second contribution to GlobalHigherEd by Ellen Hazelkorn, Director, Dublin Institute of Technology, Ireland. Ellen is also Dean of the Faculty of Applied Arts, and Director, Higher Education Policy Research Unit (HEPRU) at DIT. She also works with the OECD’s Programme for Institutional Management of Higher Education (IMHE), including on the impact of rankings on higher education. Ellen’s first, and related, entry for GlobalHigherEd is titled ‘Has higher education become a victim of its own propaganda?‘.
~~~~~~~~~~~~~~~~

When Julia Gillard, MP, the new Australian Labor Party Deputy Prime Minister and Minister for Education, Employment, Workplace Relations and Social Inclusion, opened the recent AFR HE conference, her speech was praised as the “most positive in 12 years”. Gillard’s speech combined a rousing attack on the conservative Howard government’s policies towards higher education, and society generally, with the promise to usher in “a new era of cooperation…For the first time in many years, Australian universities will have a Federal government that trusts and respects them”. But are the universities reading the tea leaves correctly?

Because attention is focused on higher education as a vital indicator of a country’s economic superpower status, universities are regarded as ‘ideal talent-catching machines’ with linkages to the national innovation system. Australia, a big country with a small population, is realising that its global ambitions are constrained by accelerating competition and by the vast sums which other countries and regions (e.g., Europe and the US) seem able to invest.

Australia’s dependence on international students is high – they comprise 17.3% of the student population, against an OECD average of 6.7% – but it lags behind in the vital postgraduate/PhD market. In this segment, international students comprise only 17.8% of the student population, while universities elsewhere have up to 50%. Thus, there is concern that, on a simple country comparison, only 2 Australian universities are included in the top 100 of the Shanghai Jiao Tong ARWU, and 8 in the ‘less-considered’ Times QS Ranking of World Universities – albeit, if the data were recalibrated for population or GDP, Australia would be fourth on both measures, sharing this top-four ranking with Hong Kong, Singapore, Switzerland and New Zealand. According to Simon Marginson, Australia lacks “truly stellar research universities, now seen as vital attractors of human, intellectual and financial capital in a knowledge economy”.

In response, Ian Chubb, Vice-Chancellor of the Australian National University, says the government should abandon its egalitarian policies and preferentially fund a select number of internationally competitive universities, while Margaret Gardner, Vice-Chancellor of RMIT University, says Australia needs a top university system.

Australia may be able to reconcile these competing and divergent views through more competitive and targeted funding linked to mission (see below on compacts) or, perhaps more controversially, by using the forthcoming HE review (see below) to reaffirm the principles of educational equity while using the complementary innovation review to build up critical mass in designated fields in select universities. Whichever direction it chooses, it needs to ensure that pursuit of its slice of the global knowledge society doesn’t simply advantage the south-east corner of its vast landscape.

Indeed, those who argue that government should fund institutions on the basis of their contribution to the economy and society may find that the metrics used are less kind to them than they think. Not only does research suggest that some universities over-inflate their claims (see Siegfried et al., ‘The Economic Impact of Colleges and Universities’), but better value for money and social return on investment may be achievable by improving pre-school or primary education, improving job chances for 16-19 year olds, or building a hospital or other large-scale facility in the vicinity. Another possibility, in a country which ostensibly values egalitarianism and is committed to balanced regional growth, is that lower-ranked universities may become preferred beneficiaries at the expense of more highly ranked institutions. This is exactly the argument that underpinned the first Shanghai Jiao Tong ranking; in other words, the team was anxious to show how poorly Chinese universities were doing vis-à-vis other countries. While Australia’s Go8 universities may seek to use this argument to their advantage, they should also be mindful that poor rankings could incentivize a government to spend more financial resources on weaker institutions (see Zhe Jin and Whalley, 2007). Or, rather than using citations – which, it could be argued, refer to articles read only by other academics – as a metric of output, impact measurements, including community/regional engagement, could be used to gauge contribution to the economy and society. This format may favor a different set of institutions.

The Australian government has begun a review of its HE system. One likely outcome will be the use of negotiated ‘compacts’ between universities and the government, which will, in turn, become the basis for determining funding linked to mission and targets. The concept was initially presented in the Australian Labor Party white paper Australia’s Universities: Building our Future in the World (2006):

The mission-based compacts will facilitate diversification of the higher education system, wider student choice and the continuation of university functions of wider community benefit that would otherwise be lost in a purely market-driven system.

Broadly welcomed, these ‘compacts’ are being interpreted variously as a method of institutional self-definition, on the one hand, or a recipe for micro-management, on the other. They appear to share some characteristics of the Danish system of performance contracts, mentioned in the University Act of 2003 (see section 10.8), and are in line with a trend away from government regulation towards steerage by planning. The actual result will probably be somewhere in between. However, given the time and resources required on both sides to ‘negotiate’, it seems clear this may not be the panacea many universities believe it to be. How much institutional autonomy or self-declaration is realistically possible? At what stage in the negotiations does the government announce the ‘end of talking’?

Another reality check may be in store as Australian universities celebrate the replacement of the Research Quality Framework with the new Excellence in Research for Australia (ERA) initiative, which combines metrics with peer evaluation. Whatever the arguments against the previous system, several HE leaders claim they had reached a point of near-satisfaction about how research was to be measured, including measuring not just output but also outcome and impact. These issues may need to be re-negotiated under the new system. Another unknown is the extent to which the ‘outcome’ of the ERA itself is linked to ‘compacts’ and to research prioritization and concentration – with implications not just for existing fields but for new fields of discovery and new research teams.

The challenges for institutions and governments are huge, and the stakes are high and getting higher. To succeed, institutions need to apply the same critical, rigorous approach to their own arguments that they would expect from their students. Universities everywhere should take note.

Ellen Hazelkorn

Thomson Innovation, UK Research Footprints®, and global audit culture

Thomson Scientific, the private firm fueling the bibliometrics drive in academia, is in the process of positioning itself as the anchor point for data on intellectual property (IP) and research. Following tantalizers in the form of free reports such as World IP Today: A Thomson Scientific Report on Global Patent Activity from 1997-2006 (from which the two images below are taken), Thomson Scientific is establishing, in phases, Thomson Innovation, which will provide, when completed:

  • Comprehensive prior art searching with the ability to search patents and scientific literature simultaneously
  • Expanded Asian patent coverage, including translations of Japanese full-text and additional editorially enhanced abstracts of Chinese data
  • A fully integrated searchable database combining Derwent World Patent Index® (DWPISM) with full-text patent data to provide the most comprehensive patent records available
  • Support of strategic intellectual property decisions through:
    • powerful analysis and visualization tools, such as charting, citation mapping and search result ranking
    • and, integration of business and news resources
  • Enhanced collaboration capabilities, including customizable folder structures that enable users to organize, annotate, search and share relevant files.

[Graphics: global patent activity charts from Thomson Scientific’s World IP Today (1997-2006)]

Speaking of bibliometrics, Evidence Ltd., the private firm that is shaping some of the debates about the post-Research Assessment Exercise (RAE) system of evaluating research quality and impact in UK universities, recently released the UK Higher Education Research Yearbook 2007. This £255 (for higher education customers) report:

[P]rovides the means to gain a rapid overview of the research strengths of any UK Higher Education institution, and compare its performance with that of its peers. It is an invaluable tool for those wishing to assess their own institution’s areas of relative strength and weakness, as well as versatile directory for those looking to invest in UK research. It will save research offices in any organisation with R&D links many months of work, allowing administrative and management staff the opportunity to focus on the strategic priorities that these data will help to inform….

It sets out in clear diagrams and summary tables the research profile for Universities and Colleges funded for research. Research Footprints® compare each institution’s performance to the average for its sector, allowing strengths and weaknesses to be rapidly identified by research managers and by industrial customers.

See below for one example of how a sample university (in this case the University of Warwick) has its “Research Footprint®” graphically represented. This image is included in a brief article about Warwick by Vice-Chancellor Nigel Thrift, available on Warwick’s News & Events website.

[Figure: University of Warwick Research Footprint®]

Given the metrics that are utilized, it is clear – even if the data are not published – that individual researchers’ footprints will be available for systematic and comparative analysis, thereby enabling the governance of faculty with the back-up of ‘data’, and the targeted recruitment of the ‘big foot’ wherever s/he resides (though Sasquatches presumably need not apply!).
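To make the footprint idea concrete, here is a minimal sketch, my own illustration rather than Evidence Ltd.’s actual method, of expressing an institution’s metrics as ratios to a sector average. The metric names and numbers are invented.

```python
# Hypothetical sketch: express an institution's research metrics as ratios to the
# sector average, the basic idea behind a footprint-style profile. The metrics and
# numbers below are invented for illustration only.

sector_average = {"income": 45.0, "papers": 1200, "citations_per_paper": 5.2, "phd_awards": 310}
institution = {"income": 61.0, "papers": 1450, "citations_per_paper": 6.8, "phd_awards": 280}

footprint = {
    metric: round(institution[metric] / sector_average[metric], 2)
    for metric in sector_average
}
# Values above 1.0 indicate performance above the sector average on that dimension.
print(footprint)
# {'income': 1.36, 'papers': 1.21, 'citations_per_paper': 1.31, 'phd_awards': 0.9}
```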

Kris Olds

Global university rankings 2007: interview with Simon Marginson

Editor’s note: The world is awash in discussion and debate about university (and disciplinary) ranking schemes, and what to do about them (e.g., see our recent entry on this). Malaysia, for example, is grappling with a series of issues related to the outcomes of the recent global ranking schemes, partly spurred on by ongoing developments but also by a new drive to create a differentiated higher education system (including so-called “Apex” universities). In this context Dr. Sarjit Kaur, Associate Research Fellow, IPPTN, Universiti Sains Malaysia, conducted an interview with Simon Marginson, Australian Professorial Fellow and Professor of Higher Education, Centre for the Study of Higher Education, The University of Melbourne. The interview was conducted on 22 November 2007.
~~~~~~~~~~~~~~~~~~~~~~~

Q: What is your overall first impression of the 2007 university rankings?

A: The Shanghai Jiao Tong (SHJT) rankings came out first, and that ranking is largely valid. The outcome shows a domination by large universities in the Western world, principally English-speaking countries and principally the US. There are no surprises in that when you look at the fact that the US spends seven times as much on higher education as the next nation, which is Japan – and seven times as much is a very big advantage in a competitive sense. The Times Higher Education Supplement (THES) rankings are not valid, in my view – you have a survey which gets a 1% return, is biased towards certain countries, and so on. The outcome tends to show that similar kinds of universities do well in the top 50 as in the SHJT, because research-strong universities also have strong reputations and that shows up strongly in the THES. But the Times ranking is more plural: major universities in a number of countries (the oldest, largest, and best-established universities) appear in the top 100 even though they aren’t strong enough in research terms to appear in the SHJT. But I don’t put any real value on the Times results – they go up and down very fast. Institutions that are in the top 100 then disappear from the top 200 two years later, like Universiti Malaya did. It doesn’t mean too much.

Q: In both global university rankings, UK and US universities still dominate the top ten places. What’s your comment on this?

A: Well, it’s predictable that they would dominate in terms of a research measure because they have the largest concentration of research power – publications in English-language journals, which are mostly edited from these countries and accrue to their scholars in large numbers. The Times is partly driven by research (only a fifth of it is) and partly driven by the number of international students institutions have – they tend to go to the UK and Australia more than to the US, but they tend to be in English-speaking countries as well. In the Times rankings, one half (50%) is determined by reputation, as there are two reputational surveys, one worth 40% and the other 10%. Now, reputation tends to follow established prestige and the English language, which is where the prestigious universities are as well. But the other factor is that the reputational surveys are biased in favour of countries which use the Times, read the Times and know the Times (usually in the former British Empire), so it tends to be the UK, Australia, New Zealand, Singapore, Malaysia and Hong Kong that put in a lot of survey returns, whereas the Europeans don’t put in many, and many other Asian countries don’t put in many. So that’s another reason why the English universities would do well. In fact the English universities do very well in the Times rankings – much better than they should really, considering their research strengths.

Q: What’s your comment on how most Asian universities performed in this year’s rankings?

A: Look, I think the SHJT is the one to watch because it gives you realistic measures of performance. The problem with the SHJT is that it tends to be a bit delayed – there’s a lag between the time you perform and the time it shows up in the rankings, because the citation and publication measures are operating off the second half of the 1990s in the Thomson highly cited researcher (HiCi) counts used by the SHJT. So when the first half of the 2000s starts to show up, you’re going to see the National University of Singapore go up from the top 200 into the top 100 pretty fast. You would expect the Chinese universities to follow as well, a bit more slowly, so that Tsinghua and Peking University, Fudan, and Jiao Tong itself will move towards the top 200 and top 100 over time, because they are really building up many strengths. That would be a useful trend line to follow. Korean universities are also going to improve markedly in the rankings over time, with Seoul National leading the way. Japan is already a major presence in the rankings, of course. I wouldn’t expect any other Asian country, at this point, to start to show up strongly. There’s no reason why the Malaysian universities should suddenly move up the research ranking table when they are not investing any more in research than they were before. It will be a long time before Malaysia starts creating an impact in the SHJT, because even if, following China’s policy, Malaysia tomorrow required universities to build on their basic research strengths, that would involve sending selected people abroad for PhDs, establishing enough strength in USM, UKM and UM and a couple more to serve as major research bases at home, having the capacity to train people at PhD level at home, and performing a lot of basic research. To do that you have to pay competitive salaries; you have to (as Singapore does) bring people back who might otherwise want to work in the US or UK…and that means paying something like UK salaries or, if not, American ones. Then you settle them down, and it will take them five years before they do their best output. Malaysia is perhaps better at marketing than it is at research performance, because it has an international education sector and because the government is quite active in promoting the university sector offshore – and that’s good, and that’s how it should be.

Q: What about the performance of Australian universities?

A: They performed as they should in the SHJT, which is to say we got 2 in the top 100. That’s not very good when you look at Canada, a country which is only slightly wealthier and about 2% bigger, with a similar kind of culture and quality, and it does much better – it has 2 in the top 40 because it spends a lot more on research. Australia would do better in the SHJT if more than just ANU was being funded specially for research. Sydney, Queensland and Western Australia were in the top 150, which is not a bad result, New South Wales is in the top 200, and Adelaide and Monash were in the top 300, as is Macquarie I think. So it’s 9 in the top 300, which is reasonably good, but there’s none in the top 50, which is not good. Australia is not there yet in being regarded as a serious research power. In the THES rankings, Australian universities did extremely well, because the survey vastly favours those countries which use the Times, know the Times and tend to return the surveys in higher-than-average numbers – and Australia is one of those – and because Australia’s international education sector is heavily promoted and Australia has a lot of international students, which pushes its position up on the internationalisation indicator. So Australia comes out scoring well in the THES rankings, with 11 universities in the top 100, and that’s just absurd when you look at the actual strengths of Australian universities and even their reputation worldwide – they’re not strong in that sense overall as research-based institutions. I’d say the same for British universities too – they did too well. University College London (UCL) this year is 9th in the ranking while stellar institutions like Stanford and the University of California, Berkeley were 19th and 22nd – that doesn’t make any sense; it’s a ludicrous result.

Q: It is widely acknowledged that in the higher education sector the keys to global competition are research performance and reputation. Do you think the rankings capture these aspects competently?

A: Well, I think the SHJT is not bad on research performance. There are a lot of ways you can do this, and I think using Nobel Prizes is not really a good indicator, because while the people who receive the prizes in the sciences and economics are usually good people, there are people who are just as good who never receive a prize – it’s submission-based, and it’s arguable whether it’s pure merit. Anyone who gets a prize has merit, but that doesn’t mean it’s the highest merit of anyone possible that year. Given that the Nobel counts for 30% of the total, I think its impact is probably a little exaggerated. So I’d take that out and use something like the citations-per-head measure, which also appears in the THES rankings using similar data, but which can be done with the SHJT database as well. But there are a lot of problems – one of the issues is the fact that some disciplines cite more than others. Medicine cites much more heavily than engineering, so a university strong in medicine tends to look rather good on the Jiao Tong indicators compared to universities strong in engineering – and many of the Chinese universities, and universities in Singapore and Australia too, are particularly strong in engineering, so that doesn’t help them. But once you start to manipulate the data, you’re on a bit of a slippery slope, because there are many other ways you can do it. I think the best measures are probably the citation measures developed by Leiden University, where they control for the size of the university and they control for the disciplines. They don’t take it any further than that, and they are very careful and transparent when they do that. So that’s probably the best single set of research outcome measures, but there are always arguments both ways when you’re trying to create a level playing field and recognise true merit. The Times doesn’t measure reputation well when you have a survey with a 1% return rate which is biased towards 4 or 5 countries and under-represents most of the others. That’s not a good way to measure reputation, so we don’t know reputation from the point of view of the world; the THES rankings are basically UK university rankings.
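As a rough illustration of the field normalization Marginson describes, dividing each paper’s citations by its field’s average, in the spirit of the Leiden indicators, consider the following sketch. The fields, averages, and citation counts are invented, and the Leiden group’s actual indicators are more sophisticated than this.

```python
# Rough sketch of field-normalized citation impact: each paper's citations are divided
# by the world average for its field, then averaged per institution, so a medical school
# and an engineering school can be compared on a common scale. All data below are invented.

from collections import defaultdict

world_avg_citations = {"medicine": 12.0, "engineering": 4.0}  # assumed per-field averages

papers = [  # (institution, field, citations) -- illustrative only
    ("Univ A", "medicine", 18),
    ("Univ A", "medicine", 9),
    ("Univ B", "engineering", 7),
    ("Univ B", "engineering", 5),
]

scores = defaultdict(list)
for institution, field, citations in papers:
    scores[institution].append(citations / world_avg_citations[field])

for institution, normalized in scores.items():
    # A mean of 1.0 means citation impact at the world average for the fields published in.
    print(institution, round(sum(normalized) / len(normalized), 2))
# Univ A 1.12  (raw citations look high, but medicine cites heavily)
# Univ B 1.5   (raw citations look low, but strong for engineering)
```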

Q: What kinds of methodological criticisms would you have against the SHJT in comparison to the THES?

A: I don’t think there’s anything that the THES does better, except that the THES uses a citations-per-head measure, which is probably a good idea. The SHJT uses a per-head measure of research performance as a whole, which is probably a less valuable way to take size into account, but I think the way Leiden does it is better than either as a size measure. That’s the only thing the THES does better; everything else the THES does a good deal worse, so I wouldn’t want to imitate the THES in any circumstances. The other problem with the Times is the composite indicator – how do you equate student-staff ratio, which is meant to measure teaching capacity, with the rest? How can you give that 20% when research gets 20% and reputation 50%? What does that mean? Why? Why not give teaching 50%, why not give research 50%? It’s so arbitrary. There’s no theory at the base of this. It’s just people sitting in a market research company and the Times office, guessing about how best to manipulate the sector. Social science should be very critical of this kind of thing, regardless of how well or how badly one’s own university is doing.
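Marginson’s point about arbitrary weights is easy to demonstrate with a toy calculation: change the weights and the rank order of the same two institutions flips. The score profiles and weights below are invented for illustration and correspond to no real ranking.

```python
# Toy illustration of how arbitrary weights drive composite rankings: the same two
# (invented) score profiles swap places when the teaching/research/reputation weights change.

scores = {
    "Univ X": {"teaching": 90, "research": 60, "reputation": 70},
    "Univ Y": {"teaching": 60, "research": 90, "reputation": 75},
}

def composite(weights):
    """Weighted sum of each university's (hypothetical) component scores."""
    return {u: round(sum(weights[k] * v for k, v in s.items()), 1) for u, s in scores.items()}

print(composite({"teaching": 0.2, "research": 0.2, "reputation": 0.6}))
# {'Univ X': 72.0, 'Univ Y': 75.0}  -> Univ Y 'wins' under reputation-heavy weights
print(composite({"teaching": 0.5, "research": 0.3, "reputation": 0.2}))
# {'Univ X': 77.0, 'Univ Y': 72.0}  -> Univ X 'wins' under teaching-heavy weights
```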

Q: In your opinion, have these global university rankings gained the trust or the confidence of mainstream public and policy credibility?

A: They’ll always get publicity if they come from apparently authoritative sources and appear to cover the world. So it’s possible, as with the Times, to develop a bad ranking and get a lot of credibility. But the Times has now lost a good deal of ground, and it is losing credibility first in informed circles like social science, then with policy makers, then with the public and the media. Its results are so volatile, and universities get treated so harshly, going up and down so fast when their performance is not changing. So everyone is now beginning to realize that there is no real relationship between the merit of the university and the outcome of the ranking. And once that happens, the ranking has no ground – it’s gone, it’s finished; and that’s what’s happening to the Times. It will keep coming out for a bit longer, but it might stop altogether because its credibility is really declining now.

Q: To what extent do university rankings help intensify global competition for HiCi researchers or getting international doctoral students or the best postgraduate students?

A: I think the Jiao Tong has had a big impact in focusing attention, in a number of countries, on getting universities into the top 100 or even the top 500 (and in some countries the top 50 or top 20), and that is leading some nations – you could name China and Germany, for example – to concentrate research investment in order to boost the position of individual universities and even disciplines, because the Jiao Tong also measures performance in five broad discipline areas, as does the Times. I think that kind of policy effect will continue. Certainly, having a credible world ranking such as the Jiao Tong will help intensify global competition and lead everyone to see the world in terms of a single competition in higher education, particularly in research performance, which focuses attention on the high-quality researchers who account for most of the research output. Studies show that 2-5% of researchers in most countries produce more than half of the outcomes in terms of publications and grants. Having that focus is helpful, and it’s a good circumstance.

Q: Do you have any further comments on the issue of whether university rankings are on the right track? What’s your prediction for the future?

A: I think bad rankings tend to undermine themselves over time because their results are not credible. Good ranking systems are open to refinement and improvement and tend to get stronger, and that’s exactly the case with the Jiao Tong. I think the next frontier for the rankings is the measurement of teaching performance and student quality – the value added at the point of exit, whether that’s done as a genuinely value-added evaluation or just as a once-off measure. The OECD is in the early stages of developing internationally comparable indicators of student competence. It might use generic competency tests, like problem-solving skills, or it may use discipline-based tests in areas like physics which are common to many countries. It’s more difficult to use disciplines, but on the other hand, if you just use skills without knowledge, that’s also limited and perhaps open to question. The OECD has many steps and problems in trying to do this, and there are questions as to how it can be done – whether within the frame of the institution or through national systems. There are many other questions, and the technical problems of getting comparable cross-country measures are considerable, but this may well happen. Once you have the capacity to rank on the basis of student outcomes, that probably becomes more powerful than research performance in some ways, at least in terms of the international market. Research performance probably distinguishes universities from other institutions and gives them prestige, but teaching outcomes are also important. Once you can establish comparability across countries and measure teaching outcomes that way, it could be a new world.

End

Quantitative metrics for “research excellence” and global positioning

In last week’s conference on Realising the Global University, organised by the Worldwide Universities Network (WUN), Professor David Eastwood, Chief Executive of the Higher Education Funding Council for England (HEFCE), spoke several times about the role of funding councils in governing universities and academics to enhance England’s standing in the global higher education sphere (‘market’ is perhaps a more appropriate term given the tone of discussions). One of the interesting dimensions of Eastwood’s position was the uneasy yet dependent relationship HEFCE has with bibliometrics and globally-scaled university ranking schemes in framing the UK’s position, taking into account HEFCE’s influence among the funding councils of England, Scotland, Wales and Northern Ireland (which together make up the UK). Eastwood expressed satisfaction with the UK’s relative standing, yet also (a) concern about emerging ‘Asian’ countries (really just China, and to a lesser degree Singapore), (b) the need to compete with research powerhouses (especially the US), and (c) the need to forge linkages with research powerhouses and emerging ‘contenders’ (ideally via joint UK-US and UK-China research projects, which are likely to lead to more jointly written papers – papers that are posited to generate relatively higher citation counts). These comments help us better understand the opening of a Research Councils UK (RCUK) office in China on 30 October 2007.

In this context, and further to our 9 November entry on bibliometrics and audit culture, it is worth noting that HEFCE launched a consultation process today about just this – bibliometrics as the core element of a new framework for assessing and funding research, especially with respect to “science-based” disciplines. HEFCE notes that “some key elements in the new framework have already been decided” (i.e., get used to the idea, and quick!), and that the consultation process is instead focused on “how they should be delivered”. Elements of the new framework include (but are not limited to):

  • Subject divisions: within an overarching framework for the assessment and funding of research, there will be distinct approaches for the science-based disciplines (in this context, the sciences, technology, engineering and medicine with the exception of mathematics and statistics) and for the other disciplines. This publication proposes where the boundary should be drawn between these two groups and proposes a subdivision of science-based disciplines into six broad subject groups for assessment and funding purposes.
  • Assessment and funding for the science-based disciplines will be driven by quantitative indicators. We will develop a new bibliometric indicator of research quality. This document builds on expert advice to set out our proposed approach to generating a quality profile using bibliometric data, and invites comments on this.
  • Assessment and funding for the other disciplines: a new light touch peer review process informed by metrics will operate for the other disciplines (the arts, humanities, social sciences and mathematics and statistics) in 2013. We have not undertaken significant development work on this to date. This publication identifies some key issues and invites preliminary views on how we should approach these.
  • Range and use of quantitative indicators: the new funding and assessment framework will also make use of indicators of research income and numbers of research students. This publication invites views on whether additional indicators should be used, for example to capture user value, and if so on what basis.
  • Role of the expert panels: panels made up of eminent UK and international practising researchers in each of the proposed subject groups, together with some research users, will be convened to advise on the selection and use of indicators within the framework for all disciplines, and to conduct the light touch peer review process in non science-based disciplines. This document invites proposals for how their role should be defined within this context.
  • Next steps: the paper identifies a number of areas for further work and sets out our proposed workplan and timetable for developing and introducing the new framework, including further consultations and a pilot exercise to help develop a method for producing bibliometric quality indicators.
  • Sector impact: a key aim in developing the framework will be to reduce the burden on researchers and higher education institutions (HEIs) created by the current arrangements. We also aim for the framework to promote equal opportunities. This publication invites comments on where we need to pay particular attention to these issues in developing the framework and what more can be done.

This process is worth following even if you are not working for a UK institution, for it sheds light on the emerging role of bibliometrics as a governing tool (evident in more and more countries), especially with respect to the global (re)positioning of national higher education systems vis-à-vis particular understandings of ‘research quality’ and ‘productivity’. Over time, of course, it will also transform some of the behaviour of many UK academics, perhaps spurring everything from heightened competition to get into high citation impact factor (CIF) journals, to greater international collaborative work (if such work indeed generates more citations), to the possible creation of “citation clubs” (much more easily done, perhaps, than HEFCE realizes), to less commitment to high-quality teaching, and a myriad of other unknown impacts, for good and for bad, by the time the new framework is “fully driving all research funding” in 2014.
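How a “quality profile using bibliometric data” might actually be constructed is left open in the consultation. One simple reading, my own assumption rather than HEFCE’s stated method, is the share of an institution’s papers whose field-normalized citation scores fall into quality bands, as in this sketch (the thresholds, band labels, and sample scores are invented):

```python
# One possible (assumed) reading of a bibliometric "quality profile": the share of an
# institution's papers whose field-normalized citation scores fall into quality bands.
# Band thresholds and sample scores are invented; HEFCE's actual method was still under
# consultation at the time of writing.

from collections import Counter

def band(normalized_citation_score):
    if normalized_citation_score >= 2.0:
        return "4*"   # far above world average
    if normalized_citation_score >= 1.5:
        return "3*"
    if normalized_citation_score >= 1.0:
        return "2*"
    return "1*"

scores = [0.4, 0.9, 1.1, 1.3, 1.6, 2.2, 2.8, 0.7, 1.05, 1.9]  # one value per paper (invented)
counts = Counter(band(s) for s in scores)
profile = {b: counts.get(b, 0) / len(scores) for b in ("4*", "3*", "2*", "1*")}
print(profile)  # {'4*': 0.2, '3*': 0.2, '2*': 0.3, '1*': 0.3}
```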

Kris Olds

Citation indices, bibliometrics, and audit culture: new research findings

The use of citation indices – and of bibliometrics more broadly, which includes article counts as well as citation counts – is sweeping much of the world of higher education governance. Citation indices, including the Thomson Scientific-produced Social Science Citation Index and Science Citation Index, and even Google Scholar, are undeniably part of modern academic life, though they are highly contested, reflective of and disproportionately supportive of the Anglo-American world (including English as lingua franca), and little understood. Like ranking schemes (which witnessed a flurry of activity this week: see the Times Higher Education Supplement global ranking results, Beerkens’ Blog on the geographies of the new THES rankings, and the Maclean’s results at a Canadian scale), bibliometrics are used to analyze scholarly activity and to frame the distribution of resources (from the individual up to the institutional scale). They are increasingly used to govern academic life, for good and for bad, and they produce a myriad of impacts that need to be much more fully explored.

The UK makes particularly heavy use of bibliometrics, spurred on by its Research Assessment Exercise (RAE). For this reason UK higher education institutions should (one would hope!) have a more advanced understanding of the uses and abuses of this analytical cum governance tool. It is thus worth noting that Universities UK (UUK) released a new report today on the topic – The use of bibliometrics to measure research quality in UK higher education institutions – to generate discussion about how to reform the much-maligned RAE process. The report was produced by Evidence Ltd., a UK “knowledge-based company specializing in data analysis, reports and consultancy focusing on the international research base”.

Evidence is led by Jonathan Adams, who wrote an illuminating chapter in the RAND Corporation report (Perspectives on U.S. Competitiveness in Science and Technology) that we recently profiled.

Evidence also has a “strategic alliance” with Thomson Scientific (previously known as Thomson ISI), which produces the citation indices noted above.

As the UUK press release notes:

The report assesses the use of bibliometrics in both STEM (science, technology, engineering and mathematics) and non-STEM subjects, and the differences in citation behaviour among subject disciplines.

Professor Eric Thomas, chair of Universities UK’s Research Policy Committee, said: “It is widely anticipated that bibliometrics will be central to the new system, but we need to ensure it is technically correct and able to inspire confidence among the research community.”

The report highlights that:

  • Bibliometrics are probably the most useful of a number of variables that could feasibly be used to measure research performance.
  • There is evidence that bibliometric indices do correlate with other, quasi-independent measures of research quality – such as RAE grades – across a range of fields in science and engineering.
  • There is a range of bibliometric variables as possible quality indicators. There are strong arguments against the use of (i) output volume (ii) citation volume (iii) journal impact and (iv) frequency of uncited papers.
  • ‘Citations per paper’ is a widely accepted index in international evaluation. Highly-cited papers are recognised as identifying exceptional research activity.
  • Accuracy and appropriateness of citation counts are a critical factor.
  • There are differences in citation behaviour among STEM and non-STEM as well as different subject disciplines.
  • Metrics do not take into account contextual information about individuals, which may be relevant. They also do not always take into account research from across a number of disciplines.
  • The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration.
  • Bibliometric indicators will need to be linked to other metrics on research funding and on research postgraduate training.
  • There are potential behavioural effects of using bibliometrics which may not be picked up for some years.
  • There are data limitations where researchers’ outputs are not comprehensively catalogued in bibliometric databases.

See here for one early reaction from the Guardian‘s Donald Macleod. This report, and subsequent reactions, provide more fodder for ongoing debates about the global higher ed audit culture that is emerging, like it or not…

Kris Olds