CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has particular expertise in education economics and applied econometrics.

Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission in the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution to the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, and the demand for greater accountability and transparency have all contributed to the growing need to compare university quality across countries.

Recognition of this need has been greatly stirred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU is always accompanied by a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of the fragmented landscape of French higher education institutions (see Les Echos, 7 August 2008).

Whether the rankers intended it or not, university rankings have taken on a destiny of their own: they are used by national policy makers to stimulate debates about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and that they are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES and SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the sensitivity of the university ranks to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we simultaneously activate different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (the set of selected indicators, the weighting scheme, and the aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than the single ranks, allows one to deal with the criticism often made of league tables and ranking systems: that ranks are presented as if they were calculated under conditions of certainty, while this is rarely the case.
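
To make the multi-modeling idea concrete, here is a minimal Python sketch of how such a frequency matrix of ranks can be simulated. The indicator data, the 80% inclusion probability, the random weights and the two aggregation rules are all illustrative assumptions of ours, not the JRC report's actual scenario design.

```python
import numpy as np

rng = np.random.default_rng(42)

n_unis, n_indic, n_runs = 88, 12, 10_000

# Illustrative indicator scores normalised to [0, 1]; the actual JRC
# analysis uses the published 2007 SJTU and THES indicator values.
scores = rng.random((n_unis, n_indic))

# rank_freq[i, r] will count how often university i obtains rank r.
rank_freq = np.zeros((n_unis, n_unis))

for _ in range(n_runs):
    # Uncertainty source 1: which indicators enter the composite.
    keep = rng.random(n_indic) < 0.8
    if not keep.any():
        continue
    # Uncertainty source 2: the weighting scheme (random weights).
    w = rng.random(keep.sum())
    w /= w.sum()
    # Uncertainty source 3: the aggregation method.
    if rng.random() < 0.5:
        composite = scores[:, keep] @ w                    # weighted sum
    else:
        composite = np.prod(scores[:, keep] ** w, axis=1)  # geometric mean
    # Convert composite scores to ranks (0 = best) and tally them.
    ranks = (-composite).argsort().argsort()
    rank_freq[np.arange(n_unis), ranks] += 1

# Normalise rows to frequencies: a wide spread across a row signals
# that the university's published rank is not robust.
rank_freq /= rank_freq.sum(axis=1, keepdims=True)
```

Whatever the exact scenario design, the output has the same form: a distribution of plausible ranks for each university rather than a single number.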

The main findings of the report are the following. Both rankings are robust only in identifying the top 15 performers on either side of the Atlantic, and are unreliable on the exact ordering of all other institutes. Even when all twelve indicators are combined in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for those universities. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation to be added to the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.

Beatrice d’Hombres  and Michaela Saisana

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relation to global ranking schemes, and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, ranking(s) watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, which has been issued by the European Commission, Directorate-General for Education and Culture, is particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is derived from the Lifelong Learning policy and program stream of the Commission.

Thus we see a shift, in Europe, towards the implementation of an alternative scheme to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

International university rankings, classifications & mappings – a view from the European University Association

Source: European University Association Newsletter, No. 20, 5 December 2008.

Note: also see ‘Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon’

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales; only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon the relative position of each university in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities’).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News and World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes’) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high-powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example, Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for an analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field, the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis are whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities the assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will enable national (at the intra-European scale) peaks (and presumably valleys) of quality output to be mapped for the humanities as a whole, but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a european model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems;
– the ability of the different European education systems to curb the rate of failure in schools;
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank

One of the hottest issues out there still continuing to attract world-wide attention is university rankings. The two highest profile ranking systems, of course, are the Shanghai Jiao Tong and the Times Higher rankings, both of which focus on what might constitute a world class university and, on the basis of that, who is ranked where. Rankings are also part of an emerging niche industry. All this of course generates a high level of institutional, national, and indeed supranational (if we count Europe in this) angst about who’s up, who’s down, and who’s managed to secure a holding position. And whilst everyone points to the flaws in these ranking systems, the two have nevertheless managed to capture the attention and imagination of the sector as a whole. In an earlier blog entry this year GlobalHigherEd mused over why European-level actors had not managed to produce an alternative system of university rankings which might counter the hegemony of the powerful Shanghai Jiao Tong (whose ranking system privileges US universities) on the one hand, and act as a policy lever that Europe could pull to direct the emerging European higher education system, on the other.

Yesterday The Lisbon Council, an EU think-tank (see our entry here for a profile of this influential think-tank), released what might be considered a challenge to the Shanghai Jiao Tong and Times Higher ranking schemes – a University Systems Ranking (USR), in its report University Systems Ranking: Citizens and Society in the Age of Knowledge. The difference between this ranking system and the Shanghai and Times rankings is that it focuses on country-level data and change, and not individual institutions.

The USR has been developed by the Human Capital Center at The Lisbon Council, Brussels (produced with support from the European Commission’s Education, Audiovisual and Culture Executive Agency), with advice from the OECD.

The report begins with these questions: why do we have university systems? What are these systems intended to do? And what do we expect them to deliver – to society, to individuals and to the world at large? The underlying message of the USR is that “a university system has a much broader mandate than producing hordes of Nobel laureates or cabals of tenure- and patent-bearing professors” (p. 6).

So how is the USR different, and what might we make of this difference for the development of universities in the future? The USR is based on six criteria:

  1. Inclusiveness – number of students enrolled in the tertiary sector relative to the size of its population
  2. Access – ability of a country’s tertiary system to accept and help advance students with a low level of scholastic aptitude
  3. Effectiveness – ability of country’s education system to produce graduates with skills relevant to the country’s labour market (wage premia is the measure)
  4. Attractiveness – ability of a country’s system to attract a diverse range of foreign students (using the top 10 source countries)
  5. Age range – ability of a country’s tertiary system to function as a lifelong learning institution (share of 30-39 year olds enrolled)
  6. Responsiveness – ability of the system to reform and change, measured by the speed and effectiveness with which the Bologna Declaration was accepted (15 of the 17 countries surveyed have accepted the Bologna criteria).

These criteria are then applied to 17 OECD countries (all but two of them signatories of the Bologna Process). A composite ranking is produced, as well as rankings on each of the criteria. So what were the outcomes for the higher education systems of these 17 countries?

Drawing upon all six criteria, a composite USR figure is then produced. Australia is ranked 1st, the UK 2nd and Denmark 3rd, whilst Austria and Spain are ranked 16th and 17th respectively (see Table 1 below). We can also see rankings based on specific criteria (Table 2 below).

[Table 1: USR composite ranking. Table 2: rankings on each of the six criteria.]
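
For readers who want to see the mechanics, here is a minimal Python sketch of how a composite ranking can be assembled from per-criterion results. The country ranks and the equal-weight averaging rule are invented for illustration; the report itself should be consulted for the actual data and aggregation method.

```python
from statistics import mean

# Hypothetical per-criterion ranks (1 = best) on the six USR criteria:
# inclusiveness, access, effectiveness, attractiveness, age range,
# responsiveness. Invented numbers for three of the 17 countries.
criterion_ranks = {
    "Australia": [3, 2, 1, 1, 4, 2],
    "UK":        [5, 1, 2, 2, 6, 3],
    "Denmark":   [2, 4, 3, 5, 1, 5],
}

# Equal-weight composite: average each country's six criterion ranks,
# then order countries by that average (lower average = better).
ordered = sorted(criterion_ranks, key=lambda c: mean(criterion_ranks[c]))
for position, country in enumerate(ordered, start=1):
    print(f"{position}. {country} (mean criterion rank "
          f"{mean(criterion_ranks[country]):.2f})")
```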

There is much to be said for this intervention by The Lisbon Council – not the least being that it opens up debates about the role and purposes of universities. Over the past few months there have been numerous heated public interventions about this matter – from whether universities should be little more than giant patenting offices to whether they should be managers of social justice systems.

And though there are evident shortcomings (such as the lack of clarity about what counts as a university; the assumption that a university-based education is the most suitable way to produce a knowledge-based economy and society; questions about the equity/access range within any one country, and so on), the USR does, at least, place issues like ‘lifelong learning’, ‘access’ and ‘inclusion’ on the reform agenda for universities across Europe. It also signals a set of values, currently not reflected in the two key ranking systems, that it would like to advance.

However, the big question now is whether universities will see value in this kind of ranking system for its wider systemic, as opposed to institutional, possibilities – even if only as a basis for discussing what universities are for and how we might produce more equitable knowledge societies and economies.

Susan Robertson and Roger Dale

Investing wisely for Australia’s future

Editor’s note: The following speech was given by Professor Ian Chubb, Vice-Chancellor of The Australian National University (ANU) on Wednesday 29 October 2008 at the National Press Club of Australia. It is reprinted in GlobalHigherEd with his kind permission.

~~~~~~~~~~~~~

Thank you Ken – for your welcome and introduction.

It has been some years since I last spoke at the National Press Club, and I appreciate the opportunity to do so again – particularly at the time when education reviews and research enquiries are being finalised and Government responses being prepared.

It is an important time; and there are opportunities not to be missed – and I plan to raise some of those with you today.

I suppose, before I start, I should make three things clear:

  1. I support the push for better funding for universities – accompanied by both reform and with selectively allocated additional funds for particular purposes based largely on the quality of the work we do – where we do it;
  2. I support the directions being pursued by the Government – and look forward to the outcomes of their various deliberations.
  3. I remind you that I am from ANU, that I work for ANU and that I serve ANU.  I like to think that through that role, however, I can also serve a bigger one – particular and important aspects of the national interest.

We at ANU do make a contribution to Australia and beyond. For a start, we educate excellent students very well indeed; we rate in the top ‘A1’ band of the Federal Minister’s Teaching and Learning Performance Fund across our teaching profile – and have done so in the two years the category has been identified. This was a surprise to some in the higher education sector, where we cherish the notion of a teaching-research nexus. Notwithstanding the mantra, some had anticipated that better teaching would be done in the places where there was less research – more to spend on it perhaps, more of a focus, and so on. It was presumed that the research-intensive universities would probably see teaching as a chore. But the research-intensive universities are places where staff and students alike are learners and where all of them are just downright curious to discover what they don’t know. And at their best, they do it together.

At ANU we continue to work to improve what we do and how we do it. We set ourselves and our students stretch targets. We aim for high standards – not throughput. And we aim to give our students a head start in their life after University.

In research we do well.  We review what we do, we rate what we do, and we manage what we do in a way that some would call relentless, though few could argue is stifling.  So I am proud that the ANU continues to lead among Australian universities in the various world rankings that are, necessarily, based largely on research performance.

We are placed 59th in the Shanghai Jiao Tong’s most recent listings and 16th on the list produced by the UK Times Higher Education Supplement, a position we have maintained now over three consecutive years.

I am proud because the rankings reflect well on the efforts of talented people. It is useful and it is reinforcing for that reason, and possibly more usefully it tells you about your neighbourhood when the world’s universities are rated along with you using a particular set of indicators.

I am not at all boastful, however, because we all know that such rankings are constructed arbitrarily around some of the available comparable measures year on year.  That they are called ‘prestigious’ or ‘highly regarded’ is largely because they are published each year, and because we have nothing else.  They are represented as ‘qualitative’ when in fact they are only partly about quality.

This is one reason why I support the Federal Government’s Excellence in Research for Australia (ERA) proposal, because, handled well, it will provide us with an efficacious and internationally benchmarked model to judge research quality in Australia.  Then we should have something truly useful to talk about.

But let me now talk about something usefully indicative that can be drawn from the Shanghai Jiao Tong (SJT) world university rankings: the neighbourhood.

When the rankings first came out five years ago, there were seven Chinese universities in the top 500.  This year there are eighteen.  It is quite possible that in five years’ time, given both the rate and the selectivity of their additional investment, there will be 10 or so Chinese universities in the top 200 and several in the top 100.  Australia may well struggle to keep our current three (ANU, Melbourne and Sydney) in that league.  Rankings are about relative performance and positions can change because of bigger changes in the performance of others and not because your own has slipped – or it could even be the way in which institutions present their data rather than any actual change in performance.  But the outcomes send signals that can be read.

Does this all matter?  Well I think it does, but it is not the only thing that matters.

When you look at the countries that have more than one university rated in the top 100 on the SJT ranking in 2007 you can see that they are countries with a high GDP per capita.

The United States stands out because of its scale.  The United Kingdom holds par when adjusted for population size.  Australia and Canada have been lifting above their weight, but Canada is now waxing while Australia is waning in every disciplinary field.  Asian universities are rising. European universities now realise that they are being left behind – but have started asking the right questions.

But history tells us that if you’re not able to pull your weight and to be a contributor you risk being locked out as a discipline, or as a university, or as a nation. If we don’t match investment and don’t match performance, Australia could be back to somewhere near where we were before 1946 – on the outside looking in.

In a world thirsty for talent and the benefits derived from the application of that talent, strategies are changing.  Many countries are ramping up their investments in their leading universities.  They are selectively concentrating additional funding for research, for research infrastructure, research centres of excellence and international research collaborations.  They are increasing the number of professors, and developing career opportunities for post-doctoral staff while enlarging doctoral student enrolments, including international PhD enrolments.

Take the example of Canada, which has set itself the goal of ranking amongst the top four countries in the world in terms of R&D performance across the government, business and higher education sectors. It set a target in 2002 of increasing the admission of Masters and PhD students at Canadian universities by 5% per year through to 2010. It is providing $300 million annually to support the salary costs of 2,000 research professors in Canadian universities and hospitals, seeking, in their own words, ‘to unleash the full research potential’ of these institutions by attracting the best talent in the world – and they are doing so selectively. Close to two thirds of the Chairs have been allocated to the 13 most research-intensive universities amongst their nearly 70 universities and a roughly equal number of Colleges. Just last week the Canadian Finance Minister commented that they must build on ‘our knowledge advantage’ and that: “This is a critical time in Canada in terms of making sure that in our public-policy decisions we support universities and colleges.”

Germany.  Germany has invested heavily in research and innovation, particularly in the university sector, aiming, in their own words, “to establish internationally visible research beacons in Germany.” Their strategy includes spending up to 195 million euros each year to establish internationally visible research and training Clusters of Excellence, based at universities but collaborating with non-university institutions. Closely tied to this is an effort, to the tune of 210 million euros each year, to heighten the profile and strength of ten selected universities to be amongst the world’s best in their areas of demonstrable excellence.

China.  China is the world’s fastest-growing supporter of research and development, with its national R&D funding now third highest in the world, just behind the United States and Japan. In 1998 China instituted the 985 Project, under which its ten leading universities were given special grants in excess of US$124 million over three years with the explicit aim of ensuring that China had universities represented amongst the world’s best. They have now enlarged the program to cover 36 universities amongst their hundreds.

Australia has still to move – and we have choices to make.  We see what is happening elsewhere: we see mostly additional funding concentrated and selectively allocated – not overtly, at least, at the expense of core funding; we see the benefits of granting universities adequate, strategic but accountable funding (like we once had); we see overt attempts to ‘internationalise’ by drawing staff and students from the global talent pool.  There is more…and so there are many lessons to be absorbed by us – an important one is to resist the temptation to spread additional resources – it would be close to a uniquely Australian approach.

And this in a world that won’t care unless we earn the right to be at the table; it is as true for our university leaders, our staff and our students, as it is for our political or our business leaders. As I said earlier – if we are not at the table we would be back to something like the position we were in just after the Second World War.

Our approach must be different from now.  We do need reform and we don’t need more tinkering. We don’t need more money tied up in small, so-called competitive programs that only partially fund what they purport to support and are not conducive to long-term planning.

I support a policy that will help universities be different from each other and to be outstanding at what they do.  I support policy-driven differentiation, not drift, and I support additional funding allocations above a better base related to both purpose and quality of work multiplied by the quantity of work.

I do not think that there is only one right way forward. And I would be happy to see us move on from an outdated ‘one-size fits all’ approach with side deals.  But not if it were replaced by the same sort of blunt and clumsy instrument.

But while there might not be one right way, I do know the wrong way: continuing the chronic partial funding of nearly everything we do. In fact, we presently cope with a lot that is chronic. Chronic under-investment. Chronic tinkering rather than real reform. Chronic suspicion rather than trust. Chronic erosion of capital and infrastructure rather than provision of the best possible resources to enable our most talented to do their best work here. Chronic lack of support for students, who are seen as a cost rather than a means by which Australia invests in its future. A chronic under-valuing of staff rather than recognising that the world’s call on talent means temptations are rife elsewhere. And a chronic under-valuing of PhD work and scholarships rather than using the peak to build capacity. The story rolls on.

For the universities of Australia to serve Australia’s interests, we need to be different from each other, respected for what we do, and be supported for what we do well over and above core support.  And we need the policy framework to make it happen, knowing that a consequence will be differential funding as different activities cost differently.

As a start we need to articulate what universities are for, what different purposes they may serve, and how.

In a recent essay, ‘What are universities for?‘, Boulton & Lucas (2008) suggest that the enduring role of universities is to create new opportunities and promote learning that enables deep thinking beyond the superficial, handles complexity and ambiguity, and shapes the future.  They argue that it is important not to be beguiled by prospects of immediate payoff from investment in university research and teaching.

I have a duty of care in my position to help build the University’s capacity to respond to such challenges in ways that are consistent with our envisaged role.  It is my responsibility, primarily, to ensure that the people with scholarly expertise in the University have the room and resources to excel in their research, the opportunity through teaching to share their first-hand insights with their students (I note that Paul Krugman, the most recent winner of the Nobel Prize for his research in Economics, said when he began his thanks: you have to start with your teachers), and the freedom to speak out when they see the need to do so, and to put their expertise into the public arena to help inform public opinion.

Let me indicate the ways by which universities can contribute, and then suggest some options for public policy.

I work from the premise that the ability of a university to deliver its mission depends crucially on public support, appropriate regulatory frameworks and adequate funding.  Without the requisite public trust and support universities cannot survive in today’s world.

Interestingly, the available evidence from surveys of community attitudes suggests that when it comes to major social, environmental and economic issues, the public and the Government look to universities for analysis, understanding and solutions.

Some of the current areas are well known: economic uncertainty, climate change, the threat of pandemics, sources of terrorism and the potential of alternative energy, just to name a few.

One of the ways ANU engages with the broader Australian community and seeks to understand what we Australians think is via ANUpoll.

The ANUpoll, led by Professor Ian McAllister in our College of Arts and Social Sciences, differs from other opinion polls by placing public opinion in a broad policy context, and by benchmarking Australian against international opinion. It can also reveal trends in opinions over many decades, drawing on the wide range of public opinion polls conducted at ANU since the 1960s.

It tells us interesting things about Australians. The first Poll, released in April this year, revealed that Australians, by international standards, are much more positively disposed to high levels of government expenditure, particularly on health, education, the environment and police. The Poll tells us that there is a greater level of trust in government in Australia relative to other nations.

The second poll, released in September, sought the views of Australians on higher education. It found that Australians are concerned about fair and equitable access to our universities; they view university as one important way of improving the job prospects of their children, but not the only avenue to success; and they believe that the decline in public funding for universities has gone too far.

And we know from the ANUpoll released today that concern about climate change is a big issue for the community. Global warming is perceived as a major long-term threat to the health of the planet by a large proportion of the population. But there is no simple solution to this problem. It is one that crosses the boundaries of science, social sciences, health, economics, law, philosophy and more. It is a big challenge; and universities have a key role to play in meeting it.

It is no coincidence that the Australian Government and state and territory governments turned to a respected academic to investigate the impact of climate change on Australia, and to propose potential solutions.  Professor Ross Garnaut in turn drew upon the work of many of his colleagues at ANU and other universities, for the latest data, for research, thinking and ideas to respond to what he identified as a ‘diabolical problem.’

Although from one discipline, Economics, Professor Garnaut’s report reflects the reality that at the heart of the climate change challenge is the need for a deep comprehension of interlaced, inseparable elements in highly complex systems. Perhaps no challenge facing us demands such an interdisciplinary approach. It is a challenge that the community expects universities to help to meet, and one that universities must help meet.

ANU is seeking to respond to that challenge with the formation of the ANU Climate Change Institute under the leadership of Professor Will Steffen.  This initiative represents a substantial effort by the University community to harness expertise across disciplines to extend knowledge about climate change – its drivers, its implications, the scope for positive responses to its impact, and possible correctives to its trajectory.

It will develop interdisciplinary research projects on climate change through the application of the University’s core capabilities around critical questions and issues.

It will develop high quality education programs aimed at meeting the national and international demand for qualified practitioners.  From 2009 ANU will offer an interdisciplinary Masters in Climate Change offered jointly between the Fenner School of Environment and Society and the Crawford School of Economics and Government. We believe it is the first of its kind in Australia.

The Climate Change Institute will also engage globally, co-hosting the International Alliance of Research Universities Copenhagen Climate Change Congress in March 2009, and engaging with the Intergovernmental Panel on Climate Change (IPCC), the World Climate Research Programme (WCRP), and the International Geosphere-Biosphere Programme (IGBP), among others.

ANU is seeking to respond to the expectations of the Australian community and government that the national university seek to find solutions to the complex problems that confront us.  The reality is that the world’s problems are inter-connected, and universities need organisational flexibility to respond creatively to the need for new knowledge in addressing them.

While the world faces the ticking time bomb of climate change, and universities here and around the world seek new ways to address such complex problems, another time bomb is ticking for universities – Australia’s changing demography.

The Group of Eight has today released a backgrounder on the challenge of population change. It estimates that at least one third of the annual number of Australian PhD graduates will be needed each year on average over the next decade merely to replace retirements from the academic workforce. Currently three quarters of doctoral graduates flow into non-academic occupations, so without additional output we would see either a slowdown of doctoral supply to the broader labour market – at a time when the country is seeking to increase the capacity of the private and public sectors to absorb new knowledge – or a shortfall in academic positions; and this is without factoring in any increase in the number of institutions to meet growth in future demand for tertiary education.

It was therefore pleasing to see the interim Report of the House of Representatives Standing Committee on Industry, Science and Innovation on Research Training in Australia.  The committee is convinced, as are we, that there is a strong case for reform – and importantly, recommendations with budget implications have bi-partisan support.

The problem is sharper for fields of research from which the pull of labour market demand is strongest – such as in engineering or geology.  We should not assume that we can meet domestic shortfall readily through immigration in the future without being prepared to pay the prices that the intensifying international competition for intellectual talent is beginning to demand.

The educational attainment of the Australian workforce is considerably below that of the world’s leaders.  Two-thirds of Australia’s workforce over 25 years of age have no post-secondary qualifications, and one third have achieved less than upper-secondary education.  Only 77% of females and 68% of males aged 19 have completed Year 12 or equivalent.

To bring Australia up to an educated workforce equivalent to the world’s leaders would involve an additional 1 million people between 25 and 44 years getting tertiary education qualifications.  To achieve that lift in the domestic skills base is challenging.  Not to do it leaves a challenge of a different kind.

Additionally, for young people aged 15 to 25, that objective would require a much higher rate of participation and would mean finding new ways of promoting access and success among potential learners who lack readiness.  For equity as well as productivity purposes it is necessary to close the achievement gap without lowering educational standards.

Taken together these rising and diversifying forms of demand for learning cannot be accommodated within the current structure of tertiary education.  Greater diversification and innovation will be needed, including new types of providers, public and private, offering flexible, multi-modal access to learning opportunities.

We should not assume this will happen naturally. Indeed we can expect resistance to it. New incentives will be needed to overcome structural and cultural conservatism. This is another reason to move away from the ‘one size fits all’ approach and, rather than looking for a simple solution, to develop a policy framework that promotes and supports difference through design rather than drift.

Twenty years ago the Australian Government expanded higher education on a foundation of three pillars:

  • An injection of additional public investment for growth in student places and new campuses
  • The provision of income-contingent loans to enable students to make a co-contribution to the costs of higher education without up-front financial barriers
  • A redistribution of existing resources from established universities to new institutions, notably through a ‘clawback’ of research funding.

The legacy of that approach is the persistence of sameness in the funding rates for teaching, the thin spreading of funding, unvalidated claims about standards of qualifications and excellence, and a levelling down of performance peaks. It was a ‘one size fits all’ approach and it was called a unified national system. In my experience over now 23 years, it was not national, rarely unified and hardly a system.

Expansion encouraged all universities to adopt broadly similar aspirations.

We are not alone.  Boulton and Lucas made that clear to us when they discussed the European dilemma: how to have research powerhouses amongst the world’s best and provide higher education for a growing proportion of the population. They point out that “…excessive convergence towards a single model of the basic research-focused university, with a lack of differentiated purpose, structure and mission…” has resulted in at least 980 (European) universities claiming to “aspire to achieve international excellence in research.” In the same article, they point out that: “The US has resolved this dilemma. Fewer than 250 universities award postgraduate degrees and fewer than 100 are recognised as research intensive, with the rest devoted to teaching and scholarship.” And remember that the US has thousands of post-secondary institutions.

The approach in Europe and Australia, including the funding approach, probably impeded some universities from identifying and investing in niches neglected by the established research universities.

Regardless of their varying circumstances, universities have tended to use the rhetoric of excellence, rather than ‘fitness for purpose’.  But ‘excellence’ is an empty notion when used without reference to validated performance standards.

The desire of institutions to move ‘up whichever ladder’ distracts higher education from its public purposes, skews missions, and alters institutional priorities and spending to drift beyond the limits of their capacity.

We see this very clearly in Australia where the gap between the Go8 universities and others in terms of research performance has been widening, not narrowing, despite the processes and funding of the last twenty years.

Clearly it is sub-optimal and counter-productive for the country to continue diluting the public investment in proven research capacity and performance.  We certainly cannot afford to apply this flawed thinking of the past to the future expansion and diversification of tertiary education.

An unfortunate effect of rankings like the Shanghai Jiao Tong measures that are based on internationally comparable data relating primarily to research output quality is that, in a highly competitive context, they reinforce traditional academic norms and encourage what Frans Van Vught has termed the ‘reputation race’.

He noted recently that:

The average quality of the current European higher education and research system is good but its excellence is limited.  A diversification of missions and of research portfolios and funding levels, would be necessary to allow the occurrence of more European top universities.

We could say the same about the fair to average overall quality of Australian higher education and research, while noting the view strongly held in European quarters and which resonates here, that student mobility is promoted through avoidance of stratification of universities.  It is seen to be anti-egalitarian to invest in excellence – at least in intellectual endeavours, for we don’t appear to have the same reluctance in sport. We invest additionally in the best athletes and national teams because in sport, we understand the concept of investing in excellence and that high achievement encourages the others.

Now we need a new approach to meet new challenges alongside longstanding needs to enlarge educational participation and strengthen capacity.

The three pillars on which the current system was expanded twenty years ago have become unstable. The Government share of funding has shrunk.  Market sources of finance are playing a greater role.

The problem with reliance on market forces in higher education is its tendency to reduce diversity in the system and to raise costs for students and taxpayers.  The market can shy away from difficult or intellectually challenging ideas where the payoff is neither easily predicted nor readily apparent.

Clearly we can’t and don’t want to wind back the clock to a centrally-planned model of higher education.  Equally, we cannot rely simply on the market.  A more flexible regulatory and financing approach is necessary, and we need to give form to the notion of mission-based funding compacts for each university that Labor proposed ahead of the 2007 election and has since indicated its intention to progress in government.  I note that Deputy Prime Minister Gillard and Minister Carr have repeatedly declared their ambitions for the universities: structurally reformed, culturally reformed, socially inclusive and internationally competitive.  Hard to argue against – possible if not easy to achieve.

It is not enough to give universities what they ask for: more money, less regulation and more autonomy.  Or for universities to expect to be given what they ask for.  Much as we might be able to argue the compelling case for better generic funding, I can’t see that we stand a chance without conceding substantial reform and improvement.

To achieve what we need, we need not just Compacts but Compacts with teeth: Compacts that will hold us to hard decisions, that validate and use evidence to agree on and adequately support our strengths, and that do not simply endorse what we say about ourselves.

Compacts will fail to provide bold and different approaches if they are tied up in second-order metrics for shallow accountability reporting.

There must be some sharp incisors to bite through the surface of universities’ claims.  I suggest that the Government should complement negotiating teams of departmental officials with people with university experience (possibly international) who can exercise the discriminating judgements that will be necessary to validate the claims of universities against their missions.

The two main components of Compacts that may be on offer are the full funding of research and greater flexibility in financing of higher education.   I see these compacts working along the lines of the recent COAG reforms of Commonwealth-State specific purpose programs, to support additional performance-based actions on top of adequate (better) funding for core activities.  These would be significant reforms and I understand they need to be mutually beneficial for universities and the supporting community.

Hence, in return for full economic costs of research, I believe it is more than reasonable that universities should be able to demonstrate better knowledge of their costs, proper pricing, avoidance of internal cross subsidies, and improved management of their estates.

In return for improved funding, greater financing flexibility, and, for some universities, ‘deregulation’, I believe universities should be prepared to expand scholarships and bursaries for needy students, extend their outreach to raise aspirations and readiness of students from disadvantaged areas, and give greater attention to student success.

In a truly differentiated system it will be necessary to provide better support for students.  We must have a system that allows talented students, regardless of their life circumstances, to go to the university that best meets their ambitions and interests.  This will mean tackling the issue of income support, including rental assistance, if we are to develop a comprehensive strategy for improving the socio-economic mix of student enrolments in a markedly differentiated university system.

The participation rate of disadvantaged groups in higher education, notably students from low socio-economic backgrounds, Indigenous Australians, and Australians from regional and remote areas, remains low.

For many of these potential students and their parents, the additional education costs that cannot be deferred in the same way as HECS constitute an insurmountable burden – living expenses remain the major financial barrier to participation. Yet the system of student income support has not been reviewed by government since 1992.

I believe the Government is heading in the right direction with its three pillars of reform:

  • Expanding participation for the purposes of social inclusion and productivity improvement.
  • Focussing on internationally benchmarked quality as the key driver of investment in research and research training, an additional benefit of which might be to dispense with ‘perceptions’ and replace them with proven performance.
  • Increasing university flexibility and harmonising competitive processes with national public priorities.

I can simply enjoin the Government to stay on track, hold to the line and not get distracted by those who will seek a weaker course.  Even if there is to be a shortfall between the investment increases we need and the capacity of the economy to afford them for the time being, it is imperative that there is no compromise on the goals we set for ourselves and the standards we set for their achievement.

Anything less would sell Australia short.

Ian Chubb

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard and Poor’s credit rating report regarding UK universities (‘“Passing judgment”: the role of credit rating agencies in the global governance of UK universities’), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s  ‘University rankings: deliberations and future directions‘), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare that they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’, including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

Beerkens’ blog noted, on 1 July, how the university rankings effect has gone as far as reshaping immigration policy in the Netherlands. He included this extract from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the list consists of 189 universities…

Quite the authority being vested in ranking schemes that are still hotly debated!
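As an aside, the proposal’s own numbers let us work out how much the two league tables agree at the top. Here is a minimal inclusion-exclusion sketch; the 150 and 189 figures come from the quoted extract, while the identification of the two tables is an assumption, since this excerpt does not name them:

```python
# Inclusion-exclusion check on the figures quoted above: two top-150
# league tables together yield 189 distinct universities, so the
# size of their overlap follows directly.
size_list_a = 150   # assumed: one league table's top 150 (e.g. Times Higher)
size_list_b = 150   # assumed: the other's top 150 (e.g. Shanghai/ARWU)
distinct = 189      # distinct universities across both lists, per the proposal

overlap = size_list_a + size_list_b - distinct
print(f"Universities appearing on both lists: {overlap}")  # prints 111
```

In other words, on the proposal’s own arithmetic, roughly three-quarters of each top-150 list is shared between the two schemes.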

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de-facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities, and The Times Higher-QS World University Rankings are generating both governance impacts, and substantial anxiety, in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high-quality universities despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment-cum-governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence, until now, of a serious European critique of, or at least interrogation of, rankings and the rankers. Let me put forward four possible explanations.

First, action at the European higher education scale has been focused on bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. There were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking is a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until more recently, how deeply geopolitical and geoeconomic these vehicles are: they enable the powerful to maintain their standing and to draw yet more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced an acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As in Asia (where I worked from 1997 to 2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish two or three universities along the lines of the King Abdullah University of Science and Technology (KAUST), which Saudi Arabia recently founded with a $10 billion endowment, angling to compete with the US privates should simply be forgotten. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be described as a fait accompli phase, national and European political leaders progressively came to view the low standing of European universities in the two key ranking schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. If the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – sit so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation for bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well-placed, despite their immense experience with bibliometrics. In short, because rankings aggregate data at a level of abstraction that brings whole universities into view, and places UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above only include representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to sustained action is pretty quick in some sectors, and slow in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, and so on. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics that they are part and parcel of.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university ranking schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged, for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schemas with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence Initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will – at least not in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally.  The full report (French only) can be downloaded here, while the press release (French only) can be read here.  France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French.  In some ways it would have been better to release the report simultaneously in French and English, but the contradiction of France critiquing dominant ranking schemes for their English-language bias, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU, or one of the many emerging initiatives noted above, would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Is the EU on target to meet the Lisbon objectives in education and training?

The European Commission (EC) has just released its annual 2007 report Progress Towards the Lisbon Objectives in Education and Training: Indicators and Benchmarks. This 195-page document highlights the key messages about the main policy areas for the EC – from the rather controversial inclusion of schools (controversial because of issues of subsidiarity) to what has become more standard fare for the EC: the vocational education and higher education sectors.

As we explain below, while the Report gives the thumbs up to the numbers of Maths, Science and Technology (MST) graduates, it gives the thumbs down to the quality of higher education. We, however, think that the benchmarks are far too simplistic and the conclusions drawn not sufficiently rigorous to support good policymaking. Let us explain.

The Report is the fourth in a series of annual assessments examining performance and progress toward the Education and Training 2010 Work Programme. These reports work as a disciplinary tool for Member States, as well as contributing to the goal of making the EU more globally competitive.

For those of you unfamiliar with EC ‘speak’: the EC’s Work Programme centers on the realization of 16 core indicators (agreed in May 2007 at the European Council and listed in the table below) and five benchmarks (also listed below), which emerged from the relaunch of the Lisbon Agenda in 2005.

[Image: the 16 core indicators]

[Image: the five benchmarks]

Chapter 7 of the Report concentrates on progress toward modernizing higher education in Europe, though curiously enough there is no mention of the Bologna Process – the radical reorganization of the degree structure of European universities which has the US and Australia on the back foot. Instead, three key areas are identified:

  • mathematics, science and technology graduates (MST)
  • mobility in higher education
  • quality of higher education institutions

With regard to MST, the EU is well on course to surpass the benchmark of an increase in the number of tertiary graduates in MST. However, the report notes that demographic trends (decreasing cohort size) will slow down growth in the long term.

[Chart: tertiary MST graduates relative to the benchmark]

While this is laudable, GlobalHigherEd notes that the problem is not so much the number of graduates produced. Rather, there are not enough attractive opportunities for researchers in Europe, with the result that a significant percentage move to the US (14% of US graduates come from Europe). The long-term attractiveness of Europe (see our recent entry) in terms of R&D therefore remains a major challenge.

With regard to mobility (see our earlier overview report), the EU has seen an increase in the percentage of students with foreign citizenship. In 2004, every EU country, with the exception of Denmark, Estonia, Latvia, Lithuania, Hungary and Slovakia, recorded an increase in the percentage of enrolled students with foreign citizenship. Austria, Belgium, Germany, France, Cyprus and the UK have the highest proportions, with foreign student populations of more than 10%.

Over the period 2000 to 2005, the number of students going to Europe from China increased more than fivefold (from 20,000 in 2000 to 107,000 in 2005; see our more detailed report on this), while numbers from India increased by 400%. While there is little doubt that the USA’s homeland security policy was a major factor, students also view the lower fees and moderate living costs in countries like France and Germany as particularly attractive. In the main:

  • non-European students studying in the EU come largely from former colonies of the European member states
  • mobility is within the EU rather than from beyond the EU, with the exception of the UK. The UK is also a stand-out case because of the small number of its citizens who study in other EU countries.

Finally, concerning the quality of higher education, the Bologna reforms are nowhere to be seen. Instead, the EC report uses the Shanghai Jiao Tong Academic Ranking of World Universities (ARWU) and the World University Rankings (WUR) of the Times Higher Education Supplement to discuss the issue of quality. The Shanghai Jiao Tong ranking uses Nobel awards and citation indexes (e.g. SCI, SSCI) – however, not only is a Nobel award a limited (some say false) proxy for quality, but the citation indexes systematically discriminate in favor of US-based institutions and journals. Only scientific output is included in each of these rankings; excluded are other kinds of university outputs which might have an impact, such as patents or policy advice.

While each ranking system is intended to be a measure of quality, it is difficult to know what we might learn when one (Times Higher) ranks an institution (for example, the London School of Economics) in 11th position while the other (Shanghai) ranks the same institution in 200th position. Such vast differences could only confuse potential students if they were using the tables to choose a high-quality institution. Perhaps, however, this is not the main purpose, and the rankings serve a more important one: ratcheting up both competition and discipline through comparison.
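One way to make such divergence concrete is to compute a rank correlation between two league tables. The sketch below is minimal and uses entirely hypothetical positions (not the published tables), chosen only to echo the LSE-style case of an institution placed 11th by one scheme and around 200th by the other:

```python
# Quantifying (dis)agreement between two ranking schemes with Spearman's
# rank correlation. All positions below are hypothetical, for illustration
# only. spearmanr converts the raw league-table positions to within-sample
# ranks before correlating, so raw positions can be passed directly.
from scipy.stats import spearmanr

times_higher_pos = [11, 30, 50, 80, 120, 150]   # hypothetical THES positions
shanghai_pos     = [200, 20, 40, 90, 110, 140]  # hypothetical ARWU positions

rho, p_value = spearmanr(times_higher_pos, shanghai_pos)
print(f"Spearman rho = {rho:.2f}")  # prints 0.14; a value near 1 would mean broad agreement
```

A single institution placed near the top of one table and near the bottom of the other is enough to pull the summary statistic down sharply, which is also a reminder that an aggregate agreement measure can mask exactly where in the tables two schemes diverge.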

League tables are now also being developed in more nuanced ways. In 2007 the Shanghai ranking introduced one by ‘broad subject field’ (see below). What is particularly interesting here is that, relative to the USA, the EU-27 does comparatively well in Engineering/Technology and Computer Sciences (ENG), Clinical Medicine and Pharmacy (MED) and Natural Sciences and Mathematics (SCI), compared with the Social Sciences (where the USA outflanks it by a considerable degree). Are the social sciences in Europe really this poor in terms of quality, and hence in serious trouble? GlobalHigherEd suggests that these differences more likely reflect the more internationalized/Anglicized publishing practices of the science, technology and medical fields, in comparison with the social sciences, which in many cases are committed to publishing in national languages.

[Table: 2007 Shanghai ranking results by broad subject field]

The somewhat dubious nature of these rankings as indicators of quality does not stop the EC from using them to show that, of the top 100 universities, 54 are located in the USA and only 29 in Europe. And again, the overall project of the EC is to set the agenda at the European scale for Member States by putting into place at the European level a set of instruments – including the recently launched European Research Council – intended to help retain MST graduates as well as recruit the brightest talent from around the globe (particularly from China and India) and keep them in Europe.

However, the MST capacity of the EU outruns its industry’s ability to absorb and retain the graduates. It is clear that the markets for students and brains are developing in different ways in different countries, but with clear ‘types’ of markets and consumers emerging. The question is: what would an EU ranking system achieve as a technology of competitive market-making?

Susan Robertson and Peter Jones