International university rankings, classifications & mappings – a view from the European University Association

Source: European University Association Newsletter, No. 20, 5 December 2008.

Note: also see ‘Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon’, below.

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ from rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or a relatively rapid swing of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s relative position in the RAE, which is the aggregate effect of the positions of its array of fields/disciplines (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities’).
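
The structural dependence described above can be illustrated with a toy calculation. The sketch below is emphatically not the actual HEFCE allocation formula; it simply assumes, for illustration, that a fixed pot is shared pro rata according to quality-weighted volume (an RAE rating weight multiplied by the number of staff submitted), which is roughly how the funding-follows-rankings logic is usually described. All department names, rating weights and figures are hypothetical.

```python
# Illustrative sketch only: a toy quality-weighted funding allocation, loosely
# modelled on descriptions of RAE-linked block grants (quality rating x research
# volume, with a fixed pot shared pro rata). All names and numbers are invented.

RATING_WEIGHTS = {"5*": 4.0, "5": 3.0, "4": 1.0, "3a": 0.0}  # assumed weights

departments = [
    {"name": "Chemistry", "rating": "5*", "staff_submitted": 40},
    {"name": "History",   "rating": "5",  "staff_submitted": 25},
    {"name": "Sociology", "rating": "4",  "staff_submitted": 30},
    {"name": "Media",     "rating": "3a", "staff_submitted": 20},
]

FUNDING_POT = 10_000_000  # hypothetical annual pot (GBP)

def weighted_volume(dept):
    """Quality weighting multiplied by volume (staff submitted)."""
    return RATING_WEIGHTS[dept["rating"]] * dept["staff_submitted"]

total_volume = sum(weighted_volume(d) for d in departments)

for d in departments:
    share = weighted_volume(d) / total_volume if total_volume else 0.0
    print(f'{d["name"]:<10} rating {d["rating"]:<3} share {share:6.1%} '
          f'-> £{share * FUNDING_POT:,.0f}')
```

Under any scheme of this general shape, a one-band drop in a department’s rating can sharply cut, or even eliminate, its share of the pot – which is why the disciplinary/field scale carries so much weight in the UK.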

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for ill.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile just two dimensions of these changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a familiar (to US readers) format, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high-powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or to use rankings, as much as if not more than any Asian, American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF)-backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specifities of Humanities research, that can make it difficult to assess and compare with other sciences. Also,  it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national (intra-European) peaks and (presumably) valleys of quality output to be mapped, not only for the humanities as a whole but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank

One of the hottest issues out there still continuing to attract world-wide attention is university rankings. The two highest-profile ranking systems, of course, are the Shanghai Jiao Tong and the Times Higher rankings, both of which focus on what might constitute a world-class university and, on the basis of that, who is ranked where. Rankings are also part of an emerging niche industry. All this of course generates a high level of institutional, national, and indeed supranational (if we count Europe in this) angst about who’s up, who’s down, and who’s managed to secure a holding position. And whilst everyone points to the flaws in these ranking systems, these two systems have nevertheless managed to capture the attention and imagination of the sector as a whole. In an earlier blog entry this year GlobalHigherEd mused over why European-level actors had not managed to produce an alternative system of university rankings which might counter the hegemony of the powerful Shanghai Jiao Tong (whose ranking system privileges the US universities) on the one hand, and act as a policy lever that Europe could pull to direct the emerging European higher education system, on the other.

Yesterday The Lisbon Council, an EU think-tank (see our entry here for a profile of this influential think-tank), released what might be considered a challenge to the Shanghai Jiao Tong and Times Higher ranking schemes – a University Systems Ranking (USR) – in its report University Systems Ranking: Citizens and Society in the Age of Knowledge. The difference between this ranking system and the Shanghai and Times rankings is that it focuses on country-level data and change, and not individual institutions.

The USR has been developed by the Human Capital Center at The Lisbon Council, Brussels (produced with support from the European Commission’s Education, Audiovisual and Culture Executive Agency), with advice from the OECD.

The report begins with the questions: why do we have university systems? What are these systems intended to do? And what do we expect them to deliver – to society, to individuals and to the world at large? The underlying message in the USR is that “a university system has a much broader mandate than producing hordes of Nobel laureates or cabals of tenure- and patent-bearing professors” (p. 6).

So how is the USR different, and what might we make of this difference for the development of universities in the future? The USR is based on six criteria:

  1. Inclusiveness – number of students enrolled in the tertiary sector relative to the size of its population
  2. Access – ability of a country’s tertiary system to accept and help advance students with a low level of scholastic aptitude
  3. Effectiveness – ability of a country’s education system to produce graduates with skills relevant to the country’s labour market (measured by wage premia)
  4. Attractiveness – ability of a country’s system to attract a diverse range of foreign students (using the top 10 source countries)
  5. Age range – ability of a country’s tertiary system to function as a lifelong learning institution (share of 30-39 year olds enrolled)
  6. Responsiveness – ability of the system to reform and change, measured by the speed and effectiveness with which the Bologna Declaration was accepted (15 of the 17 countries surveyed have accepted the Bologna criteria).

These are then applied to 17 OECD countries (all but two of them signatories of the Bologna Process). A composite ranking is produced, as well as rankings on each of the criteria. So what were the outcomes for the higher education systems of these 17 countries?

Drawing upon all six criteria, a composite USR figure is then produced. Australia is ranked 1st, the UK 2nd and Denmark 3rd, whilst Austria and Spain are ranked 16th and 17th respectively (see Table 1 below). We can also see rankings based on specific criteria (Table 2 below); a toy sketch of how such a composite can be assembled follows the tables.

[Table 1: composite University Systems Ranking results]

[Table 2: rankings by individual criterion]
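
The report’s exact aggregation method is not spelled out in the material quoted here, but the general logic of a composite built from per-criterion country rankings can be sketched as follows. This is a minimal illustration with invented scores, assuming a simple ‘rank each country on each criterion, then average the ranks’ aggregation; the Lisbon Council’s actual data and weighting may differ.

```python
# Minimal sketch of a rank-then-average composite, in the spirit of the USR's
# six criteria. Scores and values are invented for illustration; the Lisbon
# Council's actual data and aggregation method may differ.

criteria = ["inclusiveness", "access", "effectiveness",
            "attractiveness", "age_range", "responsiveness"]

# Hypothetical per-criterion scores (higher = better) for a few countries.
scores = {
    "Australia": [0.82, 0.70, 0.75, 0.90, 0.60, 0.85],
    "UK":        [0.78, 0.65, 0.80, 0.88, 0.55, 0.80],
    "Denmark":   [0.74, 0.72, 0.70, 0.60, 0.70, 0.90],
    "Austria":   [0.55, 0.50, 0.60, 0.65, 0.40, 0.60],
}

def rank_on_criterion(i):
    """Return {country: rank} for criterion i (rank 1 = best score)."""
    ordered = sorted(scores, key=lambda c: scores[c][i], reverse=True)
    return {country: pos + 1 for pos, country in enumerate(ordered)}

per_criterion_ranks = [rank_on_criterion(i) for i in range(len(criteria))]

# Composite = average of the six per-criterion ranks (lower = better).
composite = {c: sum(r[c] for r in per_criterion_ranks) / len(criteria)
             for c in scores}

for country, avg_rank in sorted(composite.items(), key=lambda kv: kv[1]):
    print(f"{country:<10} average rank {avg_rank:.2f}")
```

The point of the sketch is simply that a composite is only as meaningful as the criteria and the aggregation rule behind it – which is precisely what makes the choice of criteria, and the debate about them, so consequential.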

There is much to be said for this intervention by The Lisbon Council – not the least being that it opens up debates about the role and purposes of universities. Over the past few months there have been numerous heated public interventions about this matter – from whether universities should be little more than giant patenting offices to whether they should be managers of social justice systems.

And though there are evident shortcomings (such as a lack of clarity about what might count as a university; the assumption that a university-based education is the most suitable form of education for producing a knowledge-based economy and society; the question of the equity/access range within any one country; and so on), the USR does, at least, place issues like ‘lifelong learning’, ‘access’ and ‘inclusion’ on the reform agenda for universities across Europe. It also sends a message that there is a set of values it would like to advance that are not currently reflected in the two key ranking systems.

However, the big question now is whether universities will see value in this kind of ranking system for its wider systemic, as opposed to institutional, possibilities – even if only as a basis for discussing what universities are for and how we might produce more equitable knowledge societies and economies.

Susan Robertson and Roger Dale

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard & Poor’s credit rating report regarding UK universities (‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s  ‘University rankings: deliberations and future directions‘), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology, and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’ (photo below), including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the lists consists of 189 universities…

Quite the authority being vetted in ranking schemes that are still in the process of being hotly debated!

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety, in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high quality universities, despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what has hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment cum governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence of a serious European critique, or at least interrogation of, rankings and the rankers, until now. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking are a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes, for they never realized, until more recently, how deeply geopolitical and geoeconomic these vehicles are – vehicles that enable the powerful to maintain their standing and draw yet more resources inward. Angst regarding competition dulled sensitivity to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced an acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one really expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As in Asia (where I worked from 1997 to 2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish two or three universities in the way Saudi Arabia recently did with the $10 billion endowment for King Abdullah University of Science and Technology (KAUST), angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be described as a fait accompli phase, national and European political leaders came progressively to view the low standing of European universities in the two key ranking schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – are so far off the fast-emerging, and now hegemonic, global knowledge production maps?

In this political context, especially given state control over higher education budgets, and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well-placed, despite their immense experience with bibliometrics. In short, because rankings aggregate data at a level of abstraction that heaves whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not about the position of the broader regional system of which they are part, nor about the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above only include representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes considering how much angst and impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years for sustained action in some sectors is pretty quick, while in some sectors it is not.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, etc. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics that they are part and parcel of.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university rankings schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French. Yet in some ways it would have been better to have the report released simultaneously in both French and English. But the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU or one of the many emerging initiatives above would be wise to have the report immediately translated and placed on some relevant websites so that it can be downloaded for review and debate.

Has higher education become a victim of its own propaganda?

Editor’s note: today’s guest entry was kindly written by Ellen Hazelkorn, Director and Dean of the Faculty of Applied Arts, and Director, Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, Ireland. She also works with the OECD’s Programme for Institutional Management of Higher Education (IMHE). Her entry should be read in conjunction with some of our recent entries on the linkages and tensions between the Bologna Process and the Lisbon Strategy, the role of foundations and endowments in facilitating innovative research yet also heightening resource inequities, as well as the ever-present benchmarking and ranking debates.

~~~~~~~~~

The recent Council of the European Union’s statement on the role of higher education is another in a long list of statements from the EU, national governments, the OECD, UNESCO, etc., proclaiming the importance of higher education (HE) to/for economic development. While HE has long yearned for the time when it would head the policy agenda and be rewarded with vast sums of public investment, it may not have realised that increased funding would be accompanied by calls for greater accountability and scrutiny, pressure for value-for-money, and organisational and governance reform. Many critics cite these developments as changing the fundamentals of higher education. Has higher education become the victim of its own propaganda?

At a recent conference in Brussels a representative from the EU reflected on this paradox. The Lisbon Strategy identified a future in which Europe would be a/the leader of the global knowledge economy. But when the statistics were reviewed, there was a wide gap between vision and reality. The Shanghai Academic Ranking of World Universities, which has become the gold standard of worldwide HE rankings, has identified too few European universities among the top 100. This was, he said, a serious problem and blow to the European strategy. Change is required, urgently.

University rankings are, whether we like it or not, beginning to influence the behaviour of higher education institutions and higher education policy because they arguably provide a snap-shot of competition within the global knowledge industrial sector (see E. Hazelkorn, Higher Education Management and Policy, 19:2, and forthcoming Higher Education Policy, 2008). Denmark and France have introduced new legislation to encourage mergers or the formation of ‘pôles’ to enhance critical mass and visibility, while Germany and the UK are using national research rankings or teaching/learning evaluations as a ‘market’ mechanism to effect change. Others, like Germany, Denmark and Ireland, are enforcing changes in institutional governance, replacing elected rectors with corporate CEO-type leadership. Performance funding is a feature everywhere. Even the European Research Council’s method of ‘empowering’ (funding) the researcher rather than the institution is likely to fuel institutional competition.

In response, universities and other HEIs are having to look more strategically at the way they conduct their business, organise their affairs, and the quality of their various ‘products’, e.g., educational programming and research. In return for increased autonomy, governments want more accountability; in return for more funding, governments want more income-generation; in return for greater support for research, governments want to identify ‘winners’; and in return for valuing HE’s contribution to society, governments want measurable outputs (see, for example, this call for an “ombudsman” for higher education in Ireland).

European governments are moving from an egalitarian approach – where all institutions are broadly equal in status and quality – to one in which excellence is promoted through elite institutions, differentiation is encouraged through competitive funding, public accountability is driven by performance measurements or institutional contracts, and student fees are a reflection of consumer buoyancy.

But neither the financial costs nor implications of this strategy – for both governments and institutions – have been thought through. The German government has invested €1.9b over five years in the Excellence Initiative but this sum pales into insignificance compared with claims that a single ‘world class’ university is a $1b – $1.5b annual operation, plus $500m with a medical school, or with other national investment strategies, e.g., China’s $20b ‘211 Project’ or Korea’s $1.2b ‘Brain 21’ programme, or with the fund-raising capabilities of US universities (‘Updates on Billion-Dollar Campaigns at 31 Universities’; ‘Foundations, endowments and higher education: Europe ruminates while the USA stratifies‘).

Given public and policy disdain for increased taxation, if European governments wish to compete in this environment, which policy objectives will be sacrificed? Is the rush to establish ‘world-class’ European universities hiding a growing gap between private and public, research and teaching, elite and mass education? Evidence from Ireland suggests that despite efforts to retain a ‘binary’ system, students are fleeing from less endowed, less prestigious institutes of technology in favour of ‘universities’. At one stage, the UK government promoted the idea of concentrating research activity in a few select institutions/centres until critics, notably the Lambert report and more recently the OECD, argued that regionality does matter.

Europeans are keen to establish a ‘world class’ HE system which can compete with the best US universities. But it is clear that such efforts are being undertaken without a full understanding of the implications, intended and unintended.

Ellen Hazelkorn

OECD ministers meet in January to discuss possible evaluation of “outcomes” of higher education

Further to our last entry on this issue, and a 15 November 2007 story in The Economist, here is an official OECD summary of the Informal OECD Ministerial Meeting on evaluating the outcomes of Higher Education, held in Tokyo on 11-12 January 2008. The meeting relates to the perception, within the OECD and its member governments, of an “increasingly significant role of higher education as a driver of economic growth and the pressing need for better ways to value and develop higher education and to respond to the needs of the knowledge society”.

Global university rankings 2007: interview with Simon Marginson

Editor’s note: The world is awash in discussion and debate about university (and disciplinary) ranking schemes, and what to do about them (e.g. see our recent entry on this). Malaysia, for example, is grappling with a series of issues related to the outcomes of the recent global ranking schemes, partly spurred on by ongoing developments, but also by a new drive to create a differentiated higher education system (including so-called “Apex” universities). In this context Dr. Sarjit Kaur, Associate Research Fellow, IPPTN, Universiti Sains Malaysia, conducted an interview with Simon Marginson, Australian Professorial Fellow and Professor of Higher Education, Centre for the Study of Higher Education, The University of Melbourne. The interview was conducted on 22 November 2007.
~~~~~~~~~~~~~~~~~~~~~~~

Q: What is your overall first impression of the 2007 university rankings?

A: The Shanghai Jiao Tong (SHJT) rankings came out first, and that ranking is largely valid. The outcome shows a domination by the large universities of the Western world, principally English-speaking countries and principally the US. There are no surprises in that when you look at the fact that the US spends seven times as much on higher education as the next nation, which is Japan, and seven times as much is a very big advantage in a competitive sense. The Times Higher Education Supplement (THES) rankings are not valid, in my view – I mean, you have a survey which gets a 1% return, is biased to certain countries, and so on. The outcome tends to show that similar kinds of universities do well – in the top 50 anyway – as in the SHJT, because research-strong universities also have strong reputations, and that shows up strongly in the THES. But the Times one is more plural, with major universities in a number of countries (the oldest, largest, and best-established universities in those countries) appearing in the top 100 which aren’t strong enough in research terms to appear in the SHJT. But I don’t put any real value on the Times results – they go up and down very fast, with institutions that are in the top 100 then disappearing from the top 200 two years later, like Universiti Malaya did. It doesn’t mean too much.

Q: In both global university rankings, UK and US universities still dominate the top ten places. What’s your comment on this?

A: Well, it’s predictable that they would dominate in terms of a research measure because they have the largest concentration of research power – publications in English-language journals, which are mostly edited from these countries and draw on their scholars in numbers. The Times is partly driven by research (only a fifth of it is) and partly driven by the number of international students that institutions have – they tend to go to the UK and Australia more than they go to the US, but they tend to be in English-speaking countries as well. In the Times, one half (50%) is determined by reputation, as there are reputational surveys, of which one is worth 40% and the other 10%. Now, reputation tends to follow established prestige and the English language, where the universities have the prestige as well. But the other factor is that the reputational surveys are biased in favour of countries which use the Times, read the Times and know the Times (usually former British Empire countries), so it tends to be the UK, Australia, New Zealand, Singapore, Malaysia and Hong Kong that put in a lot of survey returns, whereas the Europeans don’t put in many, and many other Asian countries don’t put in many. So that’s another reason why the English universities would do well. In fact the English universities do very well in the Times rankings – much better than they should really, considering their research strengths.

Q: What’s your comment on how most Asian universities performed in this year’s rankings?

A: Look, I think the SHJT is the one to watch because that gives you realistic measures of performance. The problem with the SHJT is that it tends to be a bit delayed – there’s a lag between the time you perform and the time it shows up in the rankings, because the citation and publication measures, and the Thomson highly cited researcher (HiCi) counts used by the SHJT, are operating off the second half of the 1990s. So when the first half of the 2000s starts to show up, you’re going to see the National University of Singapore go up from the top 200 into the top 100 pretty fast. You would expect the Chinese universities to follow as well, a bit slower, so that Tsinghua, Peking University, Fudan, and Jiao Tong itself will move towards the top 200 and top 100 over time, because they are really building up many strengths. That would be a useful trend line to follow. Korean universities are also going to improve markedly in the rankings over time, with Seoul National leading the way. Japan’s already a major presence in the rankings, of course. I wouldn’t expect any other Asian country, at this point, to start to show up strongly. There’s no reason why the Malaysian universities should suddenly move up the research ranking table when they are not investing any more in research than they were before. It will be a long time before Malaysia starts creating an impact in the SHJT, because even if, as in China, policy tomorrow required universities to build up their basic research strengths – which would involve sending selected people abroad for PhDs, establishing enough strength in USM, UKM and UM and a couple more to serve as major research bases at home, having the capacity to train people at PhD level at home, and performing a lot of basic research – to do that you have to pay competitive salaries; you’ve got to (like Singapore does) bring people back who might otherwise want to work in the US or UK, and that means paying something like UK salaries or, if not, American ones. Then you’ll settle them down, and it’ll take them five years before they do their best output. Malaysia is perhaps better at marketing than it is at research performance, because it has an international education sector and because the government is quite active in promoting the university sector offshore – and that’s good, and that’s how it should be.

Q: What about the performance of Australian universities?

A: They performed as they should in the SHJT, which is to say we got two in the top 100. That’s not very good in the sense that when you look at Canada – a country which is only slightly wealthier and about 2% bigger, with a similar kind of culture and quality – it does much better. I mean, it has two in the top 40 because it spends a lot more on research. Australia would do better in the SHJT if more than just ANU were being funded specially for research. Sydney, Queensland and Western Australia were in the top 150, which is not a bad result, and New South Wales is in the top 200; Adelaide and Monash were in the top 300, as is Macquarie, I think. So it’s nine in the top 300, which is reasonably good, but there’s none in the top 50, which is not good. Australia is not there yet in being regarded as a serious research power. In the THES rankings, Australian universities did extremely well, because the survey vastly favours those countries which use the Times, know the Times and tend to return the surveys in higher than average numbers – and Australia is one of those – and because Australia’s international education sector is heavily promoted and Australia has a lot of international students, which pushes its position up in the internationalisation indicator. So Australia comes out scoring well in the THES rankings, having 11 universities in the top 100, and that’s just absurd when you look at the actual strengths of Australian universities and even their reputation worldwide – they’re not strong in the same sense overall as research-based institutions. I’d say the same for British universities too – I mean, they did too well. I mean, University College London (UCL) this year is 9th in the ranking and stellar institutions like Stanford and the University of California, Berkeley were 19th and 22nd – this doesn’t make any sense and it’s a ludicrous result.

Q: It is widely acknowledged that in the higher education sector the keys to global competition are research performance and reputation. Do you think the rankings capture these aspects competently?

A: Well, I think the SHJT is not bad with research performance. There are a lot of ways you can do this, and I think using the Nobel Prize is not really a good indicator, because while the people who receive the prize in the sciences and economics are usually good people, some would say that people who are just as good never receive a prize – you know, because it’s submission-based and it’s all very open; it’s arguable as to whether it’s pure merit. I mean, anyone who gets a prize has merit, but it doesn’t mean it’s the highest merit of anyone possible that year. Given that the Nobel counts towards 30% of the total, I think its impact is probably a little exaggerated. So I’d take that out and use something like the citation per head measure, which also appears in the THES rankings, actually using similar data, but which can be done with the SHJT database as well. But there are a lot of problems – one of the issues is the fact that some disciplines, for example, cite more than others. Medicine cites much more heavily than engineering, so a university strong in medicine tends to look rather good in the Jiao Tong indicators compared to universities strong in engineering – and many of the Chinese universities, and universities in Singapore and Australia too, are particularly strong in engineering, so that doesn’t help them. But once you start to manipulate the data, you’re on a bit of a slippery slope downwards because there are many other ways you can do it. I think the best measures are probably the citation measures developed by Leiden University, where they control for the size of the university and they control for the disciplines. They don’t take it any further than that, and they are very careful and transparent when they do that. So that’s probably the best single set of research outcome measures, but there are always arguments both ways when you’re trying to create a level playing field and recognise true merit. The Times doesn’t measure reputation well when you have a survey with a 1% return rate which is biased towards four or five countries and under-represents most of the others. That’s not a good way to measure reputation, so we don’t know reputation from the point of view of the world – the THES are basically UK university rankings.

Q: What kinds of methodological criticisms would you have against the SHJT in comparison to the THES?

A: I don’t think there’s anything that the THES does better, except that the THES uses the citation per head measure, which is probably a good idea. The SHJT uses a per head measure of research performance as a whole, which is probably a less valuable way to take size into account, but I think the way Leiden does it is better than either as a size measure. That’s the only thing the THES does better, and everything else the THES does a good deal worse, so I wouldn’t want to imitate the THES in any circumstances. The other problem with the Times is the composite indicator – how do you equate a student-staff ratio, which is meant to measure teaching capacity, with research or reputation? How can you give that 20%, and 20% to research and 20% to reputation? What does that mean? Why? Why not give teaching 50%, why not give research 50%? I mean, it’s so arbitrary. There’s no theory at the base of this. It’s just people sitting in a market research company and the Times office, guessing about how best to manipulate the sector. Social science should be very critical of this kind of thing, regardless of how well or how badly a university is doing.

Q: In your opinion, have these global university rankings gained the trust and confidence of the mainstream public, and credibility with policy makers?

A: They'll always get publicity if they come from apparently authoritative sources and appear to cover the world. So it's possible, as with the Times, to develop a bad ranking and get a lot of credibility. But the Times has now lost a good deal of ground, and it's losing credibility – first in informed circles like social science, then with policy makers, then with the public and the media. Its results are so volatile, and universities get treated so harshly, going up and down so fast when their performance is not changing. So everyone is now beginning to realize that there is no real relationship between the merit of the university and the outcome of the ranking. And once that happens, the ranking has no ground – it's gone, it's finished; and that's what's happening to the Times. It will keep coming out for a bit longer, but it might stop altogether because its credibility is really declining now.

Q: To what extent do university rankings help intensify global competition for HiCi researchers, international doctoral students or the best postgraduate students?

A: I think the Jiao Tong has had a big impact in focusing attention, in a number of countries, on getting universities into the top 100 or even the top 500 for that matter (and in some countries the top 50 or top 20), and that is leading in some nations – you could name China and Germany, for example – to a concentration of research investment to try to boost the position of individual universities and even disciplines, because the Jiao Tong also produces rankings in five broad discipline areas, as does the Times. I think that kind of policy effect will continue, and certainly having a single world ranking which is credible, such as the Jiao Tong, will help intensify global competition and lead everyone to see the world in terms of a single competition in higher education, particularly in research performance. That focuses attention on the high-quality researchers who produce most of the research output – studies show that 2-5% of researchers in most countries produce more than half of the outcomes in terms of publications and grants. Having that visibility is helpful and it's a good circumstance.

Q: Do you have any further comments on the issue of whether university rankings are on the right track? What’s your prediction for the future?

A: I think bad rankings tend to undermine themselves over time because their results are not credible. Good ranking systems are open to refinement and improvement and they tend to get stronger, and that's exactly the case with the Jiao Tong. I think the next frontier for rankings is the measurement of teaching performance and student quality – the value added at the point of exit, whether it's done as a value-added evaluation or just as a once-off measure. The OECD is in the early stages of developing internationally comparable indicators of student competence. It might use generic competency tests, such as problem-solving skills, or it may use discipline-based tests in areas like physics which are common to many countries. It's more difficult to use disciplines, but on the other hand if you just use skills without knowledge, that's also limited and perhaps open to question. The OECD has many steps and problems ahead in trying to do this, and there are questions as to how it can be done – whether within the frame of the institution or through national systems. There are many other questions about this, and the technical problems of getting comparable cross-country measures are considerable, but it may well happen. Once you have the capacity to rank on the basis of student outcomes, that probably becomes more powerful than research performance in some ways, at least in terms of the international market. Research performance probably distinguishes universities from other institutions and gives them prestige, but teaching outcomes are also important. Once you can establish comparability across countries and measure teaching outcomes that way, then it could be a new world.

End

Reactions to the ranking of universities: is Malaysia over-reacting?

I have had a chance to undertake a quick survey of colleagues in other countries regarding reactions in their respective countries to the UK's Times Higher World University Rankings 2007.

A colleague in the UK noted that, as one might expect from the home of one of the more notorious world rankings and a higher education system obsessed with reputation, ‘league tables’ are much discussed in the UK. The UK government, specifically the Higher Education Funding Council for England (HEFCE), has, as noted last week, commissioned major research into five ranking systems and their impact on higher education institutions in England. In other words, the UK government is very concerned with the whole business of university rankings, for the reputation of the UK as a global centre for higher education is at stake.

Another colleague reported that, among academics in the UK, the reaction to the Times Higher rankings varies widely. Many people working in higher education are deeply sceptical and cynical about such league tables – about their value, purpose and, especially, methodology. For the majority of UK universities that do not appear in the tables and are probably never likely to appear, the tables are of very little significance. However, for the main research-led universities they are a source of growing interest. These are the universities that see themselves as competing on the world stage. Whilst they will often criticise the methodologies in detail, they will still study the results very carefully and will certainly use good results for publicity and marketing. Several leading UK universities (e.g., Warwick) now have explicit targets, for example to be in the top 25 or 50 by a particular year, and are developing strategies with this in mind. However, it is reported that most UK students pay little attention to the international tables, though universities are aware that rankings can have a significant impact on the recruitment of international students.

In Hong Kong, the Times Higher rankings have been seriously discussed both in the media and by university presidents (some of whom received higher rankings this year, making it easier to request increased funding from government on the basis of their success). Among scholars and academics, especially those familiar with the various university ranking systems (the Times Higher rankings and others, like the Shanghai Jiao Tong University rankings), there is some scepticism, especially concerning the criteria used.

Rankings are a continuous source of debate in the Australian system, no doubt as a result of Australia's strong focus on the international market. Both the Times Higher rankings and the recent one undertaken by the Melbourne Institute have generated quite strong debate, spurred by Vice-Chancellors whose institutions do not score near the top.

In Brazil, it is reported that rankings of universities have not attracted media attention or public debate, for the very reason that university rankings have had no impact on the government's budgetary decisions. The more pressing issue on the higher education agenda in Brazil is social inclusion: public universities are rewarded for their plans to extend access to their undergraduate programs, especially if those plans involve large numbers of students per faculty member. Being able to attract foreign students is of secondary importance to many universities. Thus, public universities have had, and continue to have, assured access to budget streams that reflect the Government's historical level of commitment.

A colleague in France noted that the manner in which Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to the Times Higher rankings is relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia had better watch out, as the world is indeed watching us.

Morshidi Sirat

Quantitative metrics for “research excellence” and global positioning

At last week's conference on Realising the Global University, organised by the Worldwide Universities Network (WUN), Professor David Eastwood, Chief Executive of the Higher Education Funding Council for England (HEFCE), spoke several times about the role of funding councils in governing universities and academics so as to enhance England's standing in the global higher education sphere (‘market’ is perhaps a more appropriate term, given the tone of discussions). One of the interesting dimensions of Eastwood's position was HEFCE's uneasy yet dependent relationship with bibliometrics and globally scaled university ranking schemes in framing the UK's position, taking into account HEFCE's influence over funding councils in England, Scotland, Wales and Northern Ireland (which together make up the UK). Eastwood expressed satisfaction with the UK's relative standing, yet also (a) concern about emerging ‘Asian’ countries (well, really just China, and to a lesser degree Singapore), (b) the need to compete with research powerhouses (especially the US), and (c) the need to forge linkages with research powerhouses and emerging ‘contenders’ (ideally via joint UK-US and UK-China research projects, which are likely to lead to more jointly written papers; papers that are posited to generate relatively higher citation counts). These comments help us better understand the opening of a Research Councils UK (RCUK) office in China on 30 October 2007.

In this context, and further to our 9 November entry on bibliometrics and audit culture, it is worth noting that HEFCE launched a consultation process today about just this: bibliometrics as the core element of a new framework for assessing and funding research, especially with respect to “science-based” disciplines. HEFCE notes that “some key elements in the new framework have already been decided” (i.e., get used to the idea, and quick!), and that the consultation process is instead focused on “how they should be delivered”. Elements of the new framework include (but are not limited to):

  • Subject divisions: within an overarching framework for the assessment and funding of research, there will be distinct approaches for the science-based disciplines (in this context, the sciences, technology, engineering and medicine with the exception of mathematics and statistics) and for the other disciplines. This publication proposes where the boundary should be drawn between these two groups and proposes a subdivision of science-based disciplines into six broad subject groups for assessment and funding purposes.
  • Assessment and funding for the science-based disciplines will be driven by quantitative indicators. We will develop a new bibliometric indicator of research quality. This document builds on expert advice to set out our proposed approach to generating a quality profile using bibliometric data, and invites comments on this.
  • Assessment and funding for the other disciplines: a new light touch peer review process informed by metrics will operate for the other disciplines (the arts, humanities, social sciences and mathematics and statistics) in 2013. We have not undertaken significant development work on this to date. This publication identifies some key issues and invites preliminary views on how we should approach these.
  • Range and use of quantitative indicators: the new funding and assessment framework will also make use of indicators of research income and numbers of research students. This publication invites views on whether additional indicators should be used, for example to capture user value, and if so on what basis.
  • Role of the expert panels: panels made up of eminent UK and international practising researchers in each of the proposed subject groups, together with some research users, will be convened to advise on the selection and use of indicators within the framework for all disciplines, and to conduct the light touch peer review process in non science-based disciplines. This document invites proposals for how their role should be defined within this context.
  • Next steps: the paper identifies a number of areas for further work and sets out our proposed workplan and timetable for developing and introducing the new framework, including further consultations and a pilot exercise to help develop a method for producing bibliometric quality indicators.
  • Sector impact: a key aim in developing the framework will be to reduce the burden on researchers and higher education institutions (HEIs) created by the current arrangements. We also aim for the framework to promote equal opportunities. This publication invites comments on where we need to pay particular attention to these issues in developing the framework and what more can be done.

This process is worth following even if you are not working for a UK institution, for it sheds light on the emerging role of bibliometrics as a governing tool (evident in more and more countries), especially with respect to the global (re)positioning of national higher education systems vis-à-vis particular understandings of ‘research quality’ and ‘productivity’. Over time, of course, it will also transform the behaviour of many UK academics, perhaps spurring everything from heightened competition to get into high citation impact factor (CIF) journals, to greater international collaborative work (if such work indeed generates more citations), to the possible creation of “citation clubs” (much more easily done, perhaps, than HEFCE realizes), to less commitment to high quality teaching, and a myriad of other unknown impacts, for good and for bad, by the time the new framework is “fully driving all research funding” in 2014.
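
The “quality profile using bibliometric data” at the heart of the consultation could, in rough outline, look something like the sketch below. This is a hypothetical illustration only: the citation thresholds, band labels and figures are invented, and HEFCE's actual method was still under consultation at the time of writing.

```python
# Hypothetical sketch of a bibliometric "quality profile": each output's citation
# count is placed in a band relative to assumed thresholds, and the profile
# reports the share of outputs in each band. Thresholds and data are invented.

from collections import Counter

# assumed citation thresholds for each quality band (highest first)
BANDS = [("4*", 50), ("3*", 20), ("2*", 5), ("1*", 0)]

def quality_profile(citation_counts):
    """Return the percentage of outputs falling in each band."""
    counts = Counter()
    for cites in citation_counts:
        for band, threshold in BANDS:
            if cites >= threshold:
                counts[band] += 1
                break
    n = len(citation_counts)
    return {band: round(100 * counts[band] / n, 1) for band, _ in BANDS}

# invented citation counts for one department's submitted outputs
print(quality_profile([72, 34, 12, 3, 0, 55, 21, 8]))
# -> {'4*': 25.0, '3*': 25.0, '2*': 25.0, '1*': 25.0}
```

Whatever the final method, funding allocations driven by a profile of this kind make the choice of thresholds and citation windows a high-stakes, and contestable, technical decision.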

Kris Olds

University rankings: deliberations and future directions

I attended a conference (the Worldwide Universities Network-organised Realizing the Global University, with a small pre-event workshop) and an Academic Cooperation Association-organised workshop (Partners and Competitors: Analysing the EU-US Higher Education Relationship) last week. Both events were well run and fascinating. I’ll be highlighting some key themes and debates that emerged in them throughout several entries in GlobalHigherEd over the next two weeks.

One theme that garnered a significant amount of attention in both places was the ranking of universities (e.g., see one table here from the recent Times Higher Education Supplement-QS ranking that was published a few weeks ago). In both London and Brussels, stakeholders of all sorts spoke out, in mainly negative tones, about the impacts of ranking schemes, highlighting all of the usual critiques that have emerged over the last several years.

Suffice it to say that everyone is “troubled by” (“detests”, “rejects”, “can’t stand”, “loathes”, “abhors”, etc.) ranking schemes, but at the same time the schemes are used when it is seen fit to do so – which usually means by relatively highly ranked institutions and systems to legitimize their standing in the global higher ed world, or (e.g., see the case of Malaysia) to flog the politicians and senior officials governing higher education systems and/or universities.

If ranking schemes are here to stay, as they seem to be (despite the Vice-Chancellor of the University of Bristol emphasizing in London that “we only have ourselves to blame”), four themes emerged as to where the global higher ed world might be heading with respect to rankings:

(1) Critique and reformulation. If ranking schemes are here to stay, as credit rating agencies’ (e.g., Standard & Poor’s) products also are, then the schemes need to be more effectively and forcefully critiqued, with an eye to the reformulation of existing methodologies. The Higher Education Funding Council for England (HEFCE), for example, is conducting research on ranking schemes, with a large report due to be released in February 2008. This comes on the back of the Institute for Higher Education Policy’s large “multi-year project to examine the ways in which college and university ranking systems influence decision making at the institutional and government policy levels”, and a multi-phase study by the OECD on the impact of rankings in select countries (see Phase I results here). On a related note, the three-year-old Faculty Scholarly Productivity Index is continually being developed in response to critiques, though I also know of many faculty and administrators who think it is beyond repair.

(2) Extending the power and focus of rankings. This week’s Economist notes that the OECD is developing a plan for a January 2008 meeting of member education ministers where they will seek approval to “[L]ook at the end result—how much knowledge is really being imparted”. What this means, in the words of Andreas Schleicher, the OECD’s head of education research, is that rather “than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”. The article notes that the first rankings should be out in 2010, and that:

[t]he task the OECD has set itself is formidable. In many subjects, such as literature and history, the syllabus varies hugely from one country, and even one campus, to another. But OECD researchers think that problem can be overcome by concentrating on the transferable skills that employers value, such as critical thinking and analysis, and testing subject knowledge only in fields like economics and engineering, with a big common core.

Moreover, says Mr Schleicher, it is a job worth doing. Today’s rankings, he believes, do not help governments assess whether they get a return on the money they give universities to teach their undergraduates. Students overlook second-rank institutions in favour of big names, even though the less grand may be better at teaching. Worst of all, ranking by reputation allows famous places to coast along, while making life hard for feisty upstarts. “We will not be reflecting a university’s history,” says Mr Schleicher, “but asking: what is a global employer looking for?” A fair question, even if not every single student’s destiny is to work for a multinational firm.

Leaving aside the complexities and politics of this initiative, the OECD is, yet again, setting the agenda for the global higher ed world.

(3) Blissful ignorance. The WUN event had a variety of speakers from the private for-profit world, including Jorge Klor de Alva, Senior Vice-President, Academic Excellence and Director of the University of Phoenix National Resource Center. The University of Phoenix, for those of you who don’t know, is part of the Apollo Group, has over 250,000 students, and is highly profitable with global ambitions. I attended a variety of sessions where people like Klor de Alva spoke, and they could not care less about ranking schemes, for their target “market” is a “non-traditional” one that tends not to matter (to date) to the rankers. Revenue, operating margin and income, and net income (e.g., see Apollo’s financials from their 2006 Annual Report), and the views of Wall Street analysts (but not the esteem of the intelligentsia), are what matter instead for these types of players.

(4) Performance indicators for “ordinary” universities. Several speakers and commentators suggested that the existing ranking schemes are frustrating to observe from the perspective of universities not ‘on the map’, especially if they will realistically never get on the map. Alternative schemes were discussed, including performance indicators that reflect the capacity of universities to meet local and regionally specific needs; needs that are often ignored by highly ranked universities and by the institutions developing the ranking methodologies. A university could then feel better or worse depending on how it does over time against such performance indicators. This perspective is akin to that put forward by Jennifer Robinson in her brilliant critique of existing global cities ranking schemes. Robinson’s book, Ordinary Cities: Between Modernity and Development (Routledge, 2006), is well worth reading if this is your take on ranking schemes.

The future directions that ranking schemes will take are uncertain, though what is certain is that when the OECD and major funding councils start to get involved, the politics of university rankings cannot help but heat up even more. This said, presence and voice regarding the (re)shaping of schemes with distributional impacts always need to be viewed in conjunction with attention to absence and silence.

Kris Olds