Message 1: ‘RAE2008 confirms UK’s dominant position in international research’

Like the launch of a spaceship at Cape Canaveral, the UK Research Assessment Exercise (RAE) is being prepared for full release.  The press release was loaded up 14 minutes ago (and is reprinted below).  Careers, and department futures, will be made and broken when the results emerge in 46 minutes.

Note how they frame the results ever so globally; indeed far more so than in previous RAEs.  I’ll be reporting back tomorrow when the results are out, and I’ve had a chance to unpack what “international” means, and also assess just how “international” the make-up of the review panels — both the main and sub-panels — is (or is not), and what types of international registers were taken into account when assessing ‘quality’. In short, can one self-proclaim a “dominant position” in the international research landscape, and if so on what basis? Leaving aside the intra-UK dynamics (and effects) at work here, this RAE is already turning out to be a mechanism to position a research nation within the global research landscape. But for what purpose?

RAE2008 confirms UK’s dominant position in international research

18 December 2008

The results of the 2008 Research Assessment Exercise (RAE2008) announced today confirm the dominant position that universities and colleges in the United Kingdom hold in international research.

RAE2008, which is based on expert review, includes the views of international experts in all the main subject areas. The results demonstrate that 54% of the research conducted by 52,400 staff submitted by 159 universities and colleges is either ‘world-leading’ (17 per cent, in the highest grade) or ‘internationally excellent’ (37 per cent, in the second highest grade).

Taking the top three grades together (the third grade represents work of internationally recognised quality), 87% of the research activity is of international quality. Of the remaining research submitted, nearly all is of recognised national quality in terms of originality, significance and rigour.

Professor David Eastwood, Chief Executive of HEFCE, said:

“This represents an outstanding achievement, confirming that the UK is among the top rank of research powers in the world. The outcome shows more clearly than ever that there is excellent research to be found across the higher education sector. A total of 150 of the 159 institutions have some work of world-leading quality, while 49 have research of the highest quality in all of their submissions.

“The 2008 RAE has been a detailed, thorough and robust assessment of research quality. Producing quality profiles for each submission – rather than single-point ratings – has enabled the panels to exercise finer degrees of judgement. The assessment process has allowed them to take account of the full breadth of research quality, including inter-disciplinary, applied, basic and strategic research wherever it is located.

“Although we cannot make a direct comparison with the previous exercise carried out in 2001, we can be confident that the results are consistent with other benchmarks indicating that the UK holds second place globally to the US in significant subject fields. One of the most encouraging factors is that the panels reported very favourably on the high-quality work undertaken by early career researchers, which will help the UK to maintain this leading position in the future.”

John Denham, Secretary of State for Innovation, Universities and Skills, said:

“The latest RAE reinforces the UK’s position as a world leader in research and I congratulate our universities and colleges for achieving such outstanding results.

“The fact that over 50 per cent of research is either ‘world-leading’ or ‘internationally excellent’ further confirms that the UK continues to punch above its weight in this crucial field.

“To maintain global excellence during these challenging economic times it will be vital to continue to invest in research, which is why we have committed to fund almost £6bn in research and innovation in England by 2011.”

Key findings:

  • 54% of the research is either ‘world-leading’ (17% in 4*) or ‘internationally excellent’ (37% in 3*)
  • 1,258 of the 2,363 submissions (53% of total) had at least 50% of their activity rated in the two highest grades. These submissions were found in 118 institutions
  • All the submissions from 16 institutions had at least 50% of their activity assessed as 3* or 4*
  • 84% of all submissions were judged to contain at least 5% world-leading quality research
  • 150 of the 159 higher education institutions (HEIs) that took part in RAE2008 demonstrated at least 5% world-leading quality research in one or more of their submissions
  • 49 HEIs have at least some world-leading quality research in all of their submissions.
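The grade arithmetic in these key findings can be reproduced directly from a quality profile. A minimal sketch in Python (the shares are the national aggregates reported in the press release; the 1* remainder of roughly 13% is inferred from the text and is approximate):

```python
# RAE2008 national aggregate quality profile (shares of research activity).
# Grades: 4* = world-leading, 3* = internationally excellent,
# 2* = internationally recognised, 1* = nationally recognised (approximate remainder).
profile = {"4*": 0.17, "3*": 0.37, "2*": 0.33, "1*": 0.13}

def share(profile, grades):
    """Combined share of research activity falling in the given grades."""
    return sum(profile[g] for g in grades)

top_two = share(profile, ["4*", "3*"])          # 'world-leading' + 'internationally excellent'
top_three = share(profile, ["4*", "3*", "2*"])  # all work 'of international quality'

print(f"{top_two:.0%}")    # 54%
print(f"{top_three:.0%}")  # 87%
```

This is exactly how the press release's 54% and 87% figures decompose into the published grade shares.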

The ratings scale, which was included in the press release, is pasted in below:

[Image: RAE2008 ratings scale]

Kris Olds

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away.  Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ from rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales; only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) results come out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or to relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s relative position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).
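The structural dependence of funding on RAE outcomes can be illustrated with a toy quality-weighted allocation. To be clear: the grade weights and departmental profiles below are purely hypothetical illustrations, not HEFCE’s actual funding parameters — the point is only the mechanism by which a profile translates into money:

```python
# Toy QR-style allocation: grade weights x quality profile x staff volume.
# WEIGHTS and the department profiles are hypothetical, for illustration only.
WEIGHTS = {"4*": 7.0, "3*": 3.0, "2*": 1.0, "1*": 0.0}

def quality_volume(profile, staff):
    """Quality-weighted volume for one submission: staff count times the
    weighted sum of the submission's grade shares."""
    return staff * sum(WEIGHTS[g] * s for g, s in profile.items())

dept_a = {"4*": 0.30, "3*": 0.45, "2*": 0.20, "1*": 0.05}  # strong profile
dept_b = {"4*": 0.05, "3*": 0.25, "2*": 0.50, "1*": 0.20}  # weaker profile

qv_a = quality_volume(dept_a, staff=40)  # 146.0
qv_b = quality_volume(dept_b, staff=40)  # 64.0
print(f"Dept A share of the pot: {qv_a / (qv_a + qv_b):.0%}")
```

With equal staff numbers, the stronger profile captures roughly 70% of this two-department pot — which is why a weak RAE profile can trigger the hatchet at the disciplinary/field level.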

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings.  Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of these changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined.  As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging them for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes’) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example, Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field, the preparation of a new university ranking system is being considered by the French ministry of education.

The question addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]
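The kind of robustness comparison the JRC report describes – checking how far two schemes agree on the ordering of the same institutions – can be sketched as a rank-agreement test. The institutions and ranks below are invented for illustration; the correlation formula is the standard Spearman one, not the JRC’s actual methodology:

```python
# Toy rank-agreement check between two ranking schemes, in the spirit of
# the JRC comparison. Ranks are invented for illustration (no ties).
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation from the classic no-ties formula:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Ranks of five hypothetical universities under scheme A and scheme B.
scheme_a = [1, 2, 3, 4, 5]
scheme_b = [1, 3, 2, 5, 4]
print(round(spearman_rho(scheme_a, scheme_b), 2))  # 0.8
```

High agreement at the very top can coexist with substantial reshuffling below it – which is essentially the JRC finding that Jiao Tong is “only robust in the identification of the top performers”.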

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much as, if not more than, any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national (at the intra-European scale) peaks and (presumably) valleys of quality output to be mapped for the humanities as a whole, but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

UK universities research funding too inflexible warns US uni president

Today’s Financial Times story by David Turner is likely to set the ‘cat amongst the pigeons’ in terms of US-UK higher education relations. Turner reports on an interview with Yale University president Richard Levin, who argues that the UK research funding model simply is not up to the task of delivering world-class, globally ranked universities. Contrasting the US funding model with the UK’s, he argues that the US model of selective funding to reward ‘merit’ is more flexible and “also more meritocratic”. The Financial Times reports the Yale president as arguing that

…allocating a large block grant to a university after assessing it for quality department by department results in the weak being pulled up by the strong.

The evidence Levin points to in support of his case for a more individualistic approach to UK funding is that, by comparison with the UK, a high number of US universities are clustered near the top of the global university rankings.

While Levin is of course right that the US does particularly well in the global university rankings, it is not evident that this is directly an outcome of the individualistic funding strategy for university research by US funding bodies. Rather, the global university ranking systems tend to reflect US university strengths and interests (for example, patents, science citation indexes, Nobel awards).

The more important point to be made is how the top US universities actually fund their research, as opposed to UK universities. The top US universities have generous endowments, alumni giving, and funds from patents and spin-out companies, while selected universities, such as Stanford and MIT, continue to be recipients of research funds for military purposes. Top-up research funding comes from various funding agencies, such as the National Science Foundation.


By contrast, in the UK the government funds research-intensive universities through the Higher Education Funding Council for England (HEFCE) and its devolved counterparts, whilst top-up funding for specific research projects and centers comes from the various research councils.

Where the US and UK do differ is in how the funding is allocated. In the UK it is the result of a Research Assessment Exercise (RAE) where departments are assessed by panels for the ‘quality’ of their research.

Talk about what will happen to the RAE in the future suggests that it will likely move toward more individualised assessments, for instance through scores on the various citation indexes. This would bring the UK’s research funding model more into line with the US’s, though it is difficult to see how this would be a more meritocratic approach unless there is a reworking of what constitutes merit.

Taken together, unless the UK can challenge and change the basis on which universities are currently ranked globally, its overall place in the global university rankings is unlikely to alter much.

If the UK’s RAE system does move toward a citation/global rankings approach, it is likely to embrace a system that places a high value neither on critical and innovative thinking in areas of the social sciences and humanities outside the US’s sphere of intellectual interest, nor on areas of scholarship that do not score well on the global ranking scoreboards. This would surely be a disaster for realising a knowledge society.

Susan Robertson

6 November update: see Martin Knight’s (Chief Operating Officer, Imperial College) 6 November response in the FT.