Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play in terms of importance. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon the relative ranking of each university’s position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment: the role of credit rating agencies in the global governance of UK universities’).
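
To make the funding mechanics concrete, here is a stylized sketch in Python of how a quality-weighted allocation of this kind can work. The grade bands, weights, staff volumes and budget figure are all invented for illustration; the actual funding-council formulas (and the RAE grading scheme itself) are more elaborate and have changed over time.

    # Stylized sketch: splitting a fixed research budget across departments
    # in proportion to quality-weighted volume (staff submitted x grade weight).
    # Grade bands, weights, volumes and the budget are invented for illustration;
    # real funding-council formulas are more elaborate and change over time.

    GRADE_WEIGHTS = {"4*": 3.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}  # hypothetical

    def allocate(budget, departments):
        """Return each department's share of the budget."""
        weighted = {
            name: volume * GRADE_WEIGHTS[grade]
            for name, (grade, volume) in departments.items()
        }
        total = sum(weighted.values())
        return {name: budget * w / total for name, w in weighted.items()}

    departments = {
        "Physics":   ("4*", 40),  # (RAE-style grade, staff submitted)
        "History":   ("3*", 25),
        "Sociology": ("2*", 30),  # zero-weighted grades attract no funding
    }

    for name, grant in allocate(1_000_000, departments).items():
        print(f"{name}: £{grant:,.0f}")

The point is simply that the quality grade acts as a multiplier on funding, which is why a weak RAE outcome at the disciplinary/field level can translate quickly into the ‘allocation of the hatchet’ mentioned above.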

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurring revenue

First, key new media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a familiar (to US readers) format, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes’) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high-powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field, the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis are whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES rankings carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somewhat inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong ranks were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to making its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF)-backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities the assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, but in a way that will enable national (at the intra-European scale) peaks (and presumably valleys) of quality output to be mapped, not only for the humanities as a whole but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a european model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems;
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurring revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank

One of the hottest issues out there still continuing to attract world-wide attention is university rankings. The two highest profile ranking systems, of course, are the Shanghai Jiao Tong and the Times Higher rankings, both of which focus on what might constitute a world class university, and on the basis of that, who is ranked where. Rankings are also part of an emerging niche industry. All this of course generates a high level of institutional, national, and indeed supranational (if we count Europe in this) angst about who’s up, who’s down, and who’s managed to secure a holding position. And whilst everyone points to the flaws in these ranking systems, these two systems have nevertheless managed to capture the attention and imagination of the sector as a whole. In an earlier blog entry this year GlobalHigherEd mused over why European-level actors had not managed to produce an alternative system of university rankings which might counter the hegemony of the powerful Shanghai Jiao Tong ranking (which privileges US universities) on the one hand, and act as a policy lever that Europe could pull to direct the emerging European higher education system, on the other.

Yesterday The Lisbon Council, an EU think-tank (see our entry here for a profile of this influential think-tank), released what might be considered a challenge to the Shanghai Jiao Tong and Times Higher ranking schemes – a University Systems Ranking (USR) – in their report University Systems Ranking: Citizens and Society in the Age of Knowledge. The difference between this ranking system and the Shanghai and Times rankings is that it focuses on country-level data and change, and not individual institutions.

The USR has been developed by the Human Capital Center at The Lisbon Council, Brussels (produced with support from the European Commission’s Education, Audiovisual and Culture Executive Agency), with advice from the OECD.

The report begins with the questions: why do we have university systems? What are these systems intended to do? And what do we expect them to deliver – to society, to individuals and to the world at large? The underlying message in the USR is that “a university system has a much broader mandate than producing hordes of Nobel laureates or cabals of tenure- and patent-bearing professors” (p. 6).

So how is the USR different, and what might we make of this difference for the development of universities in the future? The USR is based on six criteria:

  1. Inclusiveness – number of students enrolled in the tertiary sector relative to the size of its population
  2. Access – ability of a country’s tertiary system to accept and help advance students with a low level of scholastic aptitude
  3. Effectiveness – ability of a country’s education system to produce graduates with skills relevant to the country’s labour market (wage premia are the measure)
  4. Attractiveness – ability of a country’s system to attract a diverse range of foreign students (using the top 10 source countries)
  5. Age range – ability of a country’s tertiary system to function as a lifelong learning institution (share of 30-39 year olds enrolled)
  6. Responsiveness – ability of the system to reform and change – measured by the speed and effectiveness with which the Bologna Declaration was accepted (15 of the 17 countries surveyed have accepted the Bologna criteria)

These are then applied to 17 OECD countries (all but 2 of them signatories of the Bologna Process). A composite ranking is produced, as well as rankings on each of the criteria. So what were the outcomes for the higher education systems of these 17 countries?

Drawing upon all 6 criteria, a composite USR figure is then produced. Australia is ranked 1st, the UK 2nd and Denmark 3rd, whilst Austria and Spain are ranked 16th and 17th respectively (see Table 1 below). We can also see rankings based on specific criteria (Table 2 below).

[Table 1: the composite University Systems Ranking for the 17 countries]

[Table 2: country rankings on each of the six criteria]
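
As a back-of-the-envelope illustration of how a composite country score of this kind can be assembled, here is a minimal sketch in Python. The criterion scores, the equal weighting and the three-country sample are invented for illustration; they are not The Lisbon Council’s actual data or method.

    # Minimal sketch of a composite country ranking built from six
    # per-criterion scores. All numbers below are invented for illustration;
    # they do not reproduce The Lisbon Council's data or methodology.

    CRITERIA = ["inclusiveness", "access", "effectiveness",
                "attractiveness", "age_range", "responsiveness"]

    # Hypothetical normalized scores (higher is better), one row per country.
    scores = {
        "Australia": [0.90, 0.80, 0.85, 0.90, 0.70, 0.80],
        "UK":        [0.80, 0.75, 0.90, 0.85, 0.60, 0.85],
        "Denmark":   [0.85, 0.80, 0.70, 0.60, 0.90, 0.80],
    }

    def composite_ranking(scores, weights=None):
        """Order countries by a weighted mean of their criterion scores."""
        n = len(CRITERIA)
        weights = weights or [1.0 / n] * n  # equal weights by default
        composites = {
            country: sum(w * s for w, s in zip(weights, vals))
            for country, vals in scores.items()
        }
        return sorted(composites.items(), key=lambda kv: kv[1], reverse=True)

    for rank, (country, score) in enumerate(composite_ranking(scores), start=1):
        print(f"{rank}. {country}: {score:.3f}")

The choice of weights is where the politics enters: a scheme that weights ‘inclusiveness’ as heavily as research output will produce a very different league table from the Shanghai or Times Higher rankings.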

There is much to be said for this intervention by The Lisbon Council – not the least being that it opens up debates about the role and purposes of universities. Over the past few months there have been numerous heated public interventions about this matter – from whether universities should be little more than giant patenting offices to whether they should be managers of social justice systems.

And though there are evident shortcomings (such as the lack of clarity about what might count as a university; the view that a university-based education is the most suitable form of education for producing a knowledge-based economy and society; the question of the equity/access range within any one country, and so on), the USR does, at least, place issues like ‘lifelong learning’, ‘access’ and ‘inclusion’ on the reform agenda for universities across Europe. It also sends a message that The Lisbon Council would like to advance a set of values that are not currently reflected in the two key ranking systems.

However, the big question now is whether universities will see value in this kind of ranking system for its wider systemic, as opposed to institutional, possibilities – even if only as a basis for discussing what universities are for and how we might produce more equitable knowledge societies and economies.

Susan Robertson and Roger Dale

Investing wisely for Australia’s future

Editor’s note: The following speech was given by Professor Ian Chubb, Vice-Chancellor of The Australian National University (ANU) on Wednesday 29 October 2008 at the National Press Club of Australia. It is reprinted in GlobalHigherEd with his kind permission.

~~~~~~~~~~~~~

Thank you Ken – for your welcome and introduction.

It has been some years since I last spoke at the National Press Club, and I appreciate the opportunity to do so again – particularly at the time when education reviews and research enquiries are being finalised and Government responses being prepared.

It is an important time; and there are opportunities not to be missed – and I plan to raise some of those with you today.

I suppose, before I start, I should make three things clear:

  1. I support the push for better funding for universities – accompanied by both reform and with selectively allocated additional funds for particular purposes based largely on the quality of the work we do – where we do it;
  2. I support the directions being pursued by the Government – and look forward to the outcomes of their various deliberations; and
  3. I remind you that I am from ANU, that I work for ANU and that I serve ANU.  I like to think that through that role, however, I can also serve a bigger one – particular and important aspects of the national interest.

We at ANU do make a contribution to Australia and beyond.  For a start, we educate excellent students very well indeed; we rate in the top ‘A1’ band of the Federal Minister’s Teaching and Learning Performance Fund across our teaching profile – and have done so in the two years the category has been identified.   This was a surprise to some in the higher education sector, where we cherish the notion of a teaching-research nexus.  Notwithstanding the mantra, some had anticipated that better teaching would be done in the places where there was less research – more to spend on it perhaps, more of a focus, and so on.  It was presumed that the research-intensive universities would probably see teaching as a chore. But the research-intensive universities are places where staff and students alike are learners and where all of them are just downright curious to discover what they don’t know.  And at its best, they do it together.

At ANU we continue to work to improve what we do and how we do it. We set ourselves and our students stretch targets. We aim for high standards – not throughput. And we aim to give our students a head start in their life after University.

In research we do well.  We review what we do, we rate what we do, and we manage what we do in a way that some would call relentless, though few could argue is stifling.  So I am proud that the ANU continues to lead among Australian universities in the various world rankings that are, necessarily, based largely on research performance.

We are placed at 59 in the Shanghai Jiao Tong’s most recent listings and 16th on the list produced by the UK Times Higher Education Supplement, a position we have maintained now over three consecutive years.

I am proud because the rankings reflect well on the efforts of talented people. It is useful and it is reinforcing for that reason; possibly more usefully, it tells you about your neighbourhood when the world’s universities are rated along with you using a particular set of indicators.

I am not at all boastful, however, because we all know that such rankings are constructed arbitrarily around some of the available comparable measures year on year.  That they are called ‘prestigious’ or ‘highly regarded’ is largely because they are published each year, and because we have nothing else.  They are represented as ‘qualitative’ when in fact they are only partly about quality.

This is one reason why I support the Federal Government’s Excellence in Research for Australia (ERA) proposal, because, handled well, it will provide us with an efficacious and internationally benchmarked model to judge research quality in Australia.  Then we should have something truly useful to talk about.

But let me now talk about something usefully indicative that can be drawn from the Shanghai Jiao Tong (SJT) world university rankings: the neighbourhood.

When the rankings first came out five years ago, there were seven Chinese universities in the top 500.  This year there are eighteen.  It is quite possible that in five years’ time, given both the rate and the selectivity of their additional investment, there will be 10 or so Chinese universities in the top 200 and several in the top 100.  Australia may well struggle to keep our current three (ANU, Melbourne and Sydney) in that league.  Rankings are about relative performance and positions can change because of bigger changes in the performance of others and not because your own has slipped – or it could even be the way in which institutions present their data rather than any actual change in performance.  But the outcomes send signals that can be read.

Does this all matter?  Well I think it does, but it is not the only thing that matters.

When you look at the countries that have more than one university rated in the top 100 on the SJT ranking in 2007 you can see that they are countries with a high GDP per capita.

The United States stands out because of its scale.  The United Kingdom holds par when adjusted for population size.  Australia and Canada have been lifting above their weight, but Canada is now waxing while Australia is waning in every disciplinary field.  Asian universities are rising. European universities now realise that they are being left behind – but have started asking the right questions.

But history tells us that if you’re not able to pull your weight and to be a contributor you risk being locked out as a discipline, or as a university, or as a nation. If we don’t match investment and don’t match performance, Australia could be back to somewhere near where we were before 1946 – on the outside looking in.

In a world thirsty for talent and the benefits derived from the application of that talent, strategies are changing.  Many countries are ramping up their investments in their leading universities.  They are selectively concentrating additional funding for research, for research infrastructure, research centres of excellence and international research collaborations.  They are increasing the number of professors, and developing career opportunities for post-doctoral staff while enlarging doctoral student enrolments, including international PhD enrolments.

Take the example of Canada, which has set itself a goal of ranking amongst the top 4 countries in the world in terms of R&D performance across the government, business and higher education sectors. It set a target in 2002 of increasing the admission of Masters and PhD students at Canadian universities by 5% per year through to 2010. It is providing $300 million annually to support the salary costs of 2000 research professors in Canadian universities and hospitals, seeking, in their own words, ‘to unleash the full research potential’ of these institutions by attracting the best talent in the world – and they are doing so selectively. Close to two thirds of the Chairs have been allocated to the 13 most research-intensive universities amongst their nearly 70 universities and a roughly equal number of colleges. Just last week the Canadian Finance Minister commented that they must build on ‘our knowledge advantage’ and that: “This is a critical time in Canada in terms of making sure that in our public-policy decisions that we support universities and colleges.”

Germany. Germany has invested heavily in research and innovation, particularly in the university sector, aiming, in their own words, “to establish internationally visible research beacons in Germany.” Their strategy includes spending up to 195 million Euros each year to establish internationally visible research and training Clusters of Excellence, based at universities but collaborating with non-university institutions. Closely tied to this is an effort to the tune of 210 million Euros each year to heighten the profile and strength of ten selected universities to be amongst the world’s best in their areas of demonstrable excellence.

China. China is the world’s fastest growing supporter of research and development, with its national R&D funding now third highest in the world, just behind the United States and Japan. In 1998 China instituted the 985 Project, under which its ten leading universities were given special grants in excess of US$124 million over three years with the explicit aim of ensuring that China had universities represented amongst the world’s best. They have now enlarged the program to cover 36 universities amongst their hundreds.

Australia has still to move – and we have choices to make. We see what is happening elsewhere: we see mostly additional funding concentrated and selectively allocated – not overtly, at least, at the expense of core funding; we see the benefits of granting universities adequate, strategic but accountable funding (like we once had); we see overt attempts to ‘internationalise’ by drawing staff and students from the global talent pool. There is more…and so there are many lessons to be absorbed by us – an important one is to resist the temptation to spread additional resources thinly; that would be close to a uniquely Australian approach.

And this in a world that won’t care unless we earn the right to be at their table; it is as true for our university leaders, our staff and our students, as it is for our political or our business leaders. As I said earlier – if we are not at the table we would be back to something like the position we were in just after the Second World War.

Our approach must be different from now.  We do need reform and we don’t need more tinkering. We don’t need more money tied up in small, so-called competitive programs that only partially fund what they purport to support and are not conducive to long-term planning.

I support a policy that will help universities be different from each other and to be outstanding at what they do.  I support policy-driven differentiation, not drift, and I support additional funding allocations above a better base related to both purpose and quality of work multiplied by the quantity of work.

I do not think that there is only one right way forward. And I would be happy to see us move on from an outdated ‘one-size fits all’ approach with side deals.  But not if it were replaced by the same sort of blunt and clumsy instrument.

But while there might not be one right way, I do know the wrong way: continuing the chronic partial funding of nearly everything we do. In fact, we presently cope with a lot that is chronic. Chronic under-investment. Chronic tinkering rather than real reform. Chronic suspicion rather than trust. Chronic erosion of capital and infrastructure rather than provision of the best possible resources to enable our most talented to do their best work here. Chronic lack of support for students, who are seen as a cost rather than a means by which Australia invests in its future. A chronic under-valuing of staff rather than recognising that the world’s call on talent means temptations are rife elsewhere. And a chronic under-valuing of PhD work and scholarships rather than using the peak to build capacity. The story rolls on.

For the universities of Australia to serve Australia’s interests, we need to be different from each other, respected for what we do, and be supported for what we do well over and above core support.  And we need the policy framework to make it happen, knowing that a consequence will be differential funding as different activities cost differently.

As a start we need to articulate what universities are for, what different purposes they may serve, and how.

In a recent essay, ‘What are universities for?‘, Boulton & Lucas (2008) suggest that the enduring role of universities is to create new opportunities and promote learning that enables deep thinking beyond the superficial, handles complexity and ambiguity, and shapes the future.  They argue that it is important not to be beguiled by prospects of immediate payoff from investment in university research and teaching.

I have a duty of care in my position to help build the University’s capacity to respond to such challenges in ways that are consistent with our envisaged role.  It is my responsibility, primarily, to ensure that the people with scholarly expertise in the University have the room and resources to excel in their research, the opportunity through teaching to share their first-hand insights with their students (I note that Paul Krugman, the most recent winner of the Nobel Prize for his research in Economics, said when he began his thanks: you have to start with your teachers), and the freedom to speak out when they see the need to do so, and to put their expertise into the public arena to help inform public opinion.

Let me indicate the ways by which universities can contribute, and then suggest some options for public policy.

I work from the premise that the ability of a university to deliver its mission depends crucially on public support, appropriate regulatory frameworks and adequate funding.  Without the requisite public trust and support universities cannot survive in today’s world.

Interestingly, the available evidence from surveys of community attitudes suggests that when it comes to major social, environmental and economic issues, the public and the Government look to universities for analysis, understanding and solutions.

Some of the current areas are well known: economic uncertainty, climate change, the threat of pandemics, sources of terrorism and the potential of alternative energy, just to name a few.

One of the ways ANU engages with the broader Australian community and seeks to understand what we Australians think is via ANUpoll.

The ANUpoll, led by Professor Ian McAllister in our College of Arts and Social Sciences, differs from other opinion polls by placing public opinion in a broad policy context, and by benchmarking Australian against international opinion. It can also reveal trends in opinions over many decades, drawing on the wide range of public opinion polls conducted at ANU since the 1960s.

It tells us interesting things about Australians. The first Poll, released in April this year, revealed that Australians, by international standards, are much more positively disposed to high levels of government expenditure, particularly on health, education, the environment and police. The Poll tells us that there is a greater level of trust in government in Australia relative to other nations.

The second poll, released in September, sought the views of Australians on higher education. It found that Australians are concerned about fair and equitable access to our universities; they view university as one important way of improving the job prospects of their children, but not the only avenue to success; and they believe that the decline in public funding for universities has gone too far.

And we know from the  ANUpoll released today that concern about climate change is a big issue for the community. Global warming is perceived as a major long-term threat to the health of the planet by a large proportion of the population.  But there is no simple solution to this problem. It is one that crosses the boundaries of science, social sciences, health, economics, law, philosophy and more. It is a big challenge; and Universities have a key role to play in meeting it.

It is no coincidence that the Australian Government and state and territory governments turned to a respected academic to investigate the impact of climate change on Australia, and to propose potential solutions.  Professor Ross Garnaut in turn drew upon the work of many of his colleagues at ANU and other universities, for the latest data, for research, thinking and ideas to respond to what he identified as a ‘diabolical problem.’

Although from one discipline, Economics, Professor Garnaut’s report reflects the reality that at the heart of the climate change challenge is the need for a deep comprehension of interlaced, inseparable elements in highly complex systems. Perhaps no challenge facing us demands such an interdisciplinary approach. It is a challenge that the community expects universities to help to meet, and one that universities must help meet.

ANU is seeking to respond to that challenge with the formation of the ANU Climate Change Institute under the leadership of Professor Will Steffen.  This initiative represents a substantial effort by the University community to harness expertise across disciplines to extend knowledge about climate change – its drivers, its implications, the scope for positive responses to its impact, and possible correctives to its trajectory.

It will develop interdisciplinary research projects on climate change through the application of the University’s core capabilities around critical questions and issues.

It will develop high quality education programs aimed at meeting the national and international demand for qualified practitioners. From 2009 ANU will offer an interdisciplinary Masters in Climate Change, taught jointly by the Fenner School of Environment and Society and the Crawford School of Economics and Government. We believe it is the first of its kind in Australia.

The Climate Change Institute will also engage globally, co-hosting the International Alliance of Research Universities Copenhagen Climate Change Congress in March 2009, and engaging with the Intergovernmental Panel on Climate Change (IPCC), the World Climate Research Programme (WCRP), and the International Geosphere-Biosphere Programme (IGBP), among others.

ANU is seeking to respond to the expectations of the Australian community and government that the national university seek to find solutions to the complex problems that confront us.  The reality is that the world’s problems are inter-connected, and universities need organisational flexibility to respond creatively to the need for new knowledge in addressing them.

While the world faces the ticking time bomb of climate change, and universities here and around the world seek new ways to address such complex problems, another time bomb is ticking for universities – Australia’s changing demography.

The Group of Eight has today released a backgrounder on the challenge of population change. It estimates that at least one third of the annual number of Australian PhD graduates will be needed each year on average over the next decade merely to replace retirements from the academic workforce. Currently three quarters of doctoral graduates flow into non-academic occupations, so without additional output we would see either a slowdown of doctoral supply to the broader labour market – at a time when the country is seeking to increase the capacity of the private and public sectors to absorb new knowledge – or a shortfall in academic positions, and this is without factoring in any increase in the number of institutions to meet growth in future demand for tertiary education.

It was therefore pleasing to see the interim Report of the House of Representatives Standing Committee on Industry, Science and Innovation on Research Training in Australia.  The committee is convinced, as are we, that there is a strong case for reform – and importantly, recommendations with budget implications have bi-partisan support.

The problem is sharper for fields of research from which the pull of labour market demand is strongest – such as in engineering or geology.  We should not assume that we can meet domestic shortfall readily through immigration in the future without being prepared to pay the prices that the intensifying international competition for intellectual talent is beginning to demand.

The educational attainment of the Australian workforce is considerably below that of the world’s leaders.  Two-thirds of Australia’s workforce over 25 years of age have no post-secondary qualifications, and one third have achieved less than upper-secondary education.  Only 77% of females and 68% of males aged 19 have completed Year 12 or equivalent.

To bring Australia up to an educated workforce equivalent to the world’s leaders would involve an additional 1 million people between 25 and 44 years getting tertiary education qualifications.  To achieve that lift in the domestic skills base is challenging.  Not to do it leaves a challenge of a different kind.

Additionally, for young people aged 15 to 25, that objective would require a much higher rate of participation and would mean finding new ways of promoting access and success among potential learners who lack readiness.  For equity as well as productivity purposes it is necessary to close the achievement gap without lowering educational standards.

Taken together these rising and diversifying forms of demand for learning cannot be accommodated within the current structure of tertiary education.  Greater diversification and innovation will be needed, including new types of providers, public and private, offering flexible, multi-modal access to learning opportunities.

We should not assume this will happen naturally. Indeed we can expect resistance to it. New incentives will be needed to overcome structural and cultural conservatism. This is another reason to move from the ‘one size fits all’ approach and, rather than looking for a simple solution, to develop a policy framework that promotes and supports difference through design rather than drift.

Twenty years ago the Australian Government expanded higher education on a foundation of three pillars:

  • An injection of additional public investment for growth in student places and new campuses
  • The provision of income-contingent loans to enable students to make a co-contribution to the costs of higher education without up-front financial barriers
  • A redistribution of existing resources from established universities to new institutions, notably through a ‘clawback’ of research funding.

The legacy of that approach is the persistence of sameness in the funding rates for teaching, the thin spreading of funding, unvalidated claims about standards of qualifications and excellence, and a levelling down of performance peaks. It was a ‘one size fits all approach’ and it was called a unified national system. In my experience over now 23 years, it was not national, rarely unified and hardly a system.

Expansion encouraged all universities to adopt broadly similar aspirations.

We are not alone. Boulton and Lucas made that clear to us when they discussed the European dilemma: how to have research powerhouses amongst the world’s best and provide higher education for a growing proportion of the population. They point out that “…excessive convergence towards a single model of the basic research-focused university, with a lack of differentiated purpose, structure and mission…” has resulted in at least 980 (European) universities claiming to “aspire to achieve international excellence in research.” In the same article, they point out that: “The US has resolved this dilemma. Fewer than 250 universities award postgraduate degrees and fewer than 100 are recognised as research intensive, with the rest devoted to teaching and scholarship.” And remember that the U.S. has thousands of post-secondary institutions.

The approach in Europe and Australia, including the funding approach, probably impeded some universities from identifying and investing in niches neglected by the established research universities.

Regardless of their varying circumstances, universities have tended to use the rhetoric of excellence, rather than ‘fitness for purpose’.  But ‘excellence’ is an empty notion when used without reference to validated performance standards.

The desire of institutions to move ‘up whichever ladder’ distracts higher education from its public purposes, skews missions, and alters institutional priorities and spending to drift beyond the limits of their capacity.

We see this very clearly in Australia where the gap between the Go8 universities and others in terms of research performance has been widening, not narrowing, despite the processes and funding of the last twenty years.

Clearly it is sub-optimal and counter-productive for the country to continue diluting the public investment in proven research capacity and performance.  We certainly cannot afford to apply this flawed thinking of the past to the future expansion and diversification of tertiary education.

An unfortunate effect of rankings like the Shanghai Jiao Tong measures that are based on internationally comparable data relating primarily to research output quality is that, in a highly competitive context, they reinforce traditional academic norms and encourage what Frans Van Vught has termed the ‘reputation race’.

He noted recently that:

The average quality of the current European higher education and research system is good but its excellence is limited.  A diversification of missions and of research portfolios and funding levels, would be necessary to allow the occurrence of more European top universities.

We could say the same about the fair-to-average overall quality of Australian higher education and research, while noting the view – strongly held in European quarters, and one which resonates here – that student mobility is promoted through the avoidance of stratification of universities. It is seen to be anti-egalitarian to invest in excellence – at least in intellectual endeavours, for we don’t appear to have the same reluctance in sport. We invest additionally in the best athletes and national teams because in sport we understand the concept of investing in excellence, and that high achievement encourages the others.

Now we need a new approach to meet new challenges alongside longstanding needs to enlarge educational participation and strengthen capacity.

The three pillars on which the current system was expanded twenty years ago have become unstable. The Government share of funding has shrunk.  Market sources of finance are playing a greater role.

The problem with reliance on market forces in higher education is its tendency to reduce diversity in the system, and raise costs for students and taxpayers.  The market can be afraid of difficult or intellectually challenging ideas where the payoff isn’t easily predictable or easily apparent.

Clearly we can’t and don’t want to wind back the clock to a centrally-planned model of higher education.  Equally we cannot rely simply on the market.  A more flexible regulatory and financing approach is necessary, and we need to give form to the notion of mission-based funding compacts for each university that Labor proposed ahead of the 2007 election and has indicated subsequently its intention to progress in government.  I note that Deputy Prime Minister Gillard and Minister Carr have repeatedly declared their ambitions for the universities: structurally reformed, culturally reformed, socially inclusive and internationally competitive.  Hard to argue against – possible if not easy to achieve.

It is not enough to give universities what they ask for: more money, less regulation and more autonomy.  Or for universities to expect to be given what they ask for.  Much as we might be able to argue the compelling case for better generic funding, I can’t see that we stand a chance without conceding substantial reform and improvement.

To achieve what we need, we need not just Compacts but Compacts with teeth. We need Compacts that will hold us to hard decisions, validate and use evidence to agree and provide adequate support for our strengths and not simply endorse what we say about ourselves.

Compacts will fail to provide bold and different approaches if they are tied up in second-order metrics for shallow accountability reporting.

There must be some sharp incisors to bite through the surface of universities’ claims.  I suggest that the Government should complement negotiating teams of departmental officials with people with university experience (possibly international) who can exercise the discriminating judgements that will be necessary to validate the claims of universities against their missions.

The two main components of Compacts that may be on offer are the full funding of research and greater flexibility in financing of higher education.   I see these compacts working along the lines of the recent COAG reforms of Commonwealth-State specific purpose programs, to support additional performance-based actions on top of adequate (better) funding for core activities.  These would be significant reforms and I understand they need to be mutually beneficial for universities and the supporting community.

Hence, in return for full economic costs of research, I believe it is more than reasonable that universities should be able to demonstrate better knowledge of their costs, proper pricing, avoidance of internal cross subsidies, and improved management of their estates.

In return for improved funding, greater financing flexibility, and, for some universities, ‘deregulation’, I believe universities should be prepared to expand scholarships and bursaries for needy students, extend their outreach to raise aspirations and readiness of students from disadvantaged areas, and give greater attention to student success.

In a truly differentiated system it will be necessary to provide better support for students.  We must have a system that allows talented students, regardless of their life circumstances, to go to the university that best meets their ambitions and interests.  This will mean tackling the issue of income support, including rental assistance, if we are to develop a comprehensive strategy for improving the socio-economic mix of student enrolments in a markedly differentiated university system.

The participation rate of disadvantaged groups in higher education, notably students from low socio-economic backgrounds, Indigenous Australians, and Australians from regional and remote areas, remains low.

For many of these potential students and their parents, the additional education costs that cannot be deferred in the same way as HECS constitute an insurmountable burden – living expenses remain the major financial barrier to participation. Yet the system of student income support has not been reviewed by government since 1992.

I believe the Government is heading in the right direction with its three pillars of reform:

  • Expanding participation for the purposes of social inclusion and productivity improvement.
  • Focussing on internationally benchmarked quality as the key driver of investment in research and research training. An additional benefit of which might be to dispense with ‘perceptions’ and replace them with proven performance.
  • Increasing university flexibility and harmonising competitive processes with national public priorities.

I can simply enjoin the Government to stay on track, hold to the line and not get distracted by those who will seek a weaker course.  Even if there is to be a shortfall between the investment increases we need and the capacity of the economy to afford them for the time being, it is imperative that there is no compromise on the goals we set for ourselves and the standards we set for their achievement.

Anything less would sell Australia short.

Ian Chubb

Times Higher Education – QS World University Rankings (2008): a niche industry in formation?

The new Times Higher Education – QS World University Rankings (2008) were just released, and the copyright regulations deepen and extend, push and pull, enable and constrain. Global rankings: a niche industry in formation?

Kris Olds

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard & Poor’s credit rating report regarding UK universities (‘Passing judgment: the role of credit rating agencies in the global governance of UK universities’), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s ‘University rankings: deliberations and future directions’), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology, and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’, including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the list consists of 189 universities…

Quite the authority being vested in ranking schemes that are still in the process of being hotly debated!
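For readers puzzled by the arithmetic, the eligibility rule is just a set union. Here is a minimal sketch with entirely invented university names (the real lists are the two league tables cited in the proposal), showing how two top-150 lists collapse to 189 unique institutions once the overlap is removed:

```python
# Hypothetical illustration of the Dutch eligibility arithmetic: all names
# are invented stand-ins for the two real league tables cited above.
list_a = {f"University {i}" for i in range(1, 151)}    # top 150 of table A
list_b = {f"University {i}" for i in range(40, 190)}   # top 150 of table B

overlap = list_a & list_b   # universities appearing on both lists
eligible = list_a | list_b  # a degree from either list qualifies a migrant

print(len(list_a), len(list_b), len(overlap), len(eligible))
# 150 150 111 189 -> with 111 shared entries, 150 + 150 - 111 = 189,
# the figure quoted in the policy proposal.
```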

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de-facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities, and The Times Higher-QS World University Rankings are generating both governance impacts, and substantial anxiety, in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high quality universities, despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment-cum-governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence of a serious European critique, or at least interrogation of, rankings and the rankers, until now. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking are a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until more recently, how deeply geopolitical and geoeconomic these vehicles are, enabling the powerful to maintain their standing and harness yet more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced their acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one really expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As with Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities in the way Saudi Arabia recently did with King Abdullah University of Science and Technology (KAUST) and its $10 billion endowment, then angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be defined as a fait accompli phase, national and European political leaders came progressively to view the low status of European universities in the two key rankings schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when universities – key knowledge producers – are so far off the fast emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets, and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the above initiatives I listed only include representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes considering how much angst and impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years for sustained action in some sectors is pretty quick, while in some sectors it is not.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, etc. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics that they are part and parcel of.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university rankings schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally.  The full report (French only) can be downloaded here, while the press release (French only) can be read here.  France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French.  In some ways it would have been better to have the report released simultaneously in both French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU or one of the many emerging initiatives above would be wise to have the report immediately translated and placed on some relevant websites so that it can be downloaded for review and debate.

Australia, be careful what you wish for

Editor’s note: this is the second contribution to GlobalHigherEd by Ellen Hazelkorn, Director, Dublin Institute of Technology, Ireland. Ellen is also Dean of the Faculty of Applied Arts, and Director, Higher Education Policy Research Unit (HEPRU) at DIT. She also works with the OECD’s Programme for Institutional Management of Higher Education (IMHE), including on the impact of rankings on higher education. Ellen’s first, and related, entry for GlobalHigherEd is titled ‘Has higher education become a victim of its own propaganda?‘.
~~~~~~~~~~~~~~~~

When Julia Gillard, MP, the new Australian Labor Party Deputy Prime Minister and Minister for Education, Employment, Workplace Relations and Social Inclusion, opened the recent AFR HE conference, her speech was praised as being the “most positive in 12 years”. Gillard’s speech combined a rousing attack on the conservative Howard government’s policies towards higher education, and society generally, with the promise to usher in “a new era of cooperation…For the first time in many years, Australian universities will have a Federal government that trusts and respects them”. But, are the universities reading the tea-leaves correctly?

Because attention is focused on higher education as a vital indicator of a country’s economic super-power status, universities are regarded as ‘ideal talent-catching machines’ with linkages to the national innovation system. Australia, a big country with a small population, is realising that its global ambitions are constrained by accelerating competition and the vast sums which other countries and regions, e.g., Europe and the US, seem able to invest.

Its dependence on international students is high: they comprise 17.3% of the student population, against an OECD average of 6.7%. But Australia lags behind in the vital postgraduate/PhD student market, where international students comprise only 17.8% of the total student population while universities elsewhere have up to 50%. Thus, there is concern that, on a simple country comparison, only 2 Australian universities are included in the top 100 of the Shanghai Jiao Tong ARWU, and 8 in the ‘less-considered’ Times QS Ranking of World Universities – albeit, if the data were recalibrated for population or GDP, Australia would rank fourth on both measures, sharing the top placings with Hong Kong, Singapore, Switzerland and New Zealand. According to Simon Marginson, Australia lacks “truly stellar research universities, now seen as vital attractors of human, intellectual and financial capital in a knowledge economy”.
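The recalibration mentioned above is mechanically simple. Below is a minimal sketch in which only Australia’s ARWU count (2 in the top 100) comes from the text; every other count and all population figures are invented assumptions for illustration, not 2008 data.

```python
# Sketch of per-capita recalibration of ranking counts. Only Australia's
# ARWU count (2) comes from the text above; all other counts and the
# population figures are illustrative assumptions.
top100 = {"USA": 54, "UK": 11, "Switzerland": 3, "Australia": 2, "Singapore": 1}
pop_m = {"USA": 301, "UK": 61, "Switzerland": 7.5, "Australia": 21, "Singapore": 4.6}

per_million = {c: n / pop_m[c] for c, n in top100.items()}
for country, score in sorted(per_million.items(), key=lambda kv: -kv[1]):
    print(f"{country:12s} {score:.3f} top-100 universities per million people")
# Dividing by population is what lets small systems such as Switzerland or
# Singapore leapfrog the raw-count leaders.
```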

In response, Ian Chubb, Vice Chancellor, Australian National University, says the government should abandon its egalitarian policies and preferentially fund a select number of internationally competitive universities, while Margaret Gardner, Vice Chancellor, RMIT University, says Australia needs a top university system.

Australia may be able to reconcile these competing and divergent views through more competitive and targeted funding linked to mission (see below on compacts) or, perhaps more controversially, by using the forthcoming HE review (see below) to reaffirm the principles of educational equity while using the complementary innovation review to build up critical mass in designated fields in select universities. Whichever direction it chooses, it needs to ensure that pursuit of its slice of the global knowledge society doesn’t simply become advantageous for the south-east corner of its vast landscape.

Indeed, those who argue that government should fund institutions on the basis of their contribution to the economy and society may find that the metrics used are less kind to them than they think. Not only does research suggest some universities over-inflate their claims (see Siegfried et al, ‘The Economic Impact of Colleges and Universities’), but better value-for-money and social return on investment may be achievable from improving pre-school or primary education, improving job chances for 16-19 year olds, or building a hospital or other large-scale facility in the vicinity. Another possibility, in a country which ostensibly values egalitarianism and is committed to balanced regional growth, is that universities ranked lower may become preferred beneficiaries at the expense of more highly ranked institutions. This is exactly the argument that underpinned the first Shanghai Jiao Tong ranking; in other words, the team was anxious to show how poorly Chinese universities were doing vis-à-vis other countries. While Australia’s Go8 universities may seek to use this argument to their advantage, they should also be mindful that poor rankings could incentivize a government to spend more financial resources on weaker institutions (see Zhe Jin and Whalley, 2007). Or, rather than using citations – which, it could be argued, refer to articles read only by other academics – as a metric of output, impact measurements – including community/regional engagement – could be used to measure contribution to the economy and society. This format may favor a different set of institutions.

The Australian government has begun a review of its HE system. One likely outcome will be the use of negotiated ‘compacts’ between universities and the government which will, in turn, become the basis for determining funding linked to mission and targets. The concept was initially presented in the Australian Labor Party white paper Australia’s Universities: Building our Future in the World (2006):

The mission-based compacts will facilitate diversification of the higher education system, wider student choice and the continuation of university functions of wider community benefit that would otherwise be lost in a purely market-driven system.

Broadly welcomed, these ‘compacts’ are being widely interpreted as a method of institutional self-definition, on the one hand, or a recipe for micro-management, on the other. They appear to share some characteristics of the Danish system of performance contracts, mentioned in the University Act of 2003 (see section 10.8), and are in line with a trend away from government regulation towards steerage by planning. The actual result will probably be somewhere in-between. However, given the time and resources required on both sides to ‘negotiate’, it seems clear this may not be the panacea many universities believe it to be. How much institutional autonomy or self-declaration is realistically possible? At what stage in the negotiations does the government announce the ‘end of talking’?

Another reality-check may be in store as Australian universities celebrate replacement of the Research Quality Framework with the new Excellence in Research for Australia (ERA) initiative, which combines metrics with peer evaluation. Whatever the arguments against the previous system, several HE leaders claim they had reached a point of near-satisfaction with how research was to be measured, including measuring not just output but also outcome and impact. These issues may need to be re-negotiated under the new system. Another unknown is the extent to which the ‘outcome’ of the ERA itself is linked to ‘compacts’ and research prioritization and concentration – with implications not just for existing fields but for new fields of discovery and new research teams.

The challenges for institutions and governments are huge, and the stakes are high and getting higher. To succeed, institutions need to employ the same critical rigorous approach to their arguments that they would expect from their students. Universities everywhere should take note.

Ellen Hazelkorn

Has higher education become a victim of its own propaganda?

Editor’s note: today’s guest entry was kindly written by Ellen Hazelkorn, Director, and Dean of the Faculty of Applied Arts, and Director, Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, Ireland. She also works with the OECD’s Programme for Institutional Management of Higher Education (IMHE). Her entry should be read in conjunction with some of our recent entries on the linkages and tensions between the Bologna Process and the Lisbon Strategy, the role of foundations and endowments in facilitating innovative research yet also heightening resource inequities, as well as the ever present benchmarking and ranking debates.

~~~~~~~~~

The recent Council of the European Union’s statement on the role of higher education is another in a long list of statements from the EU, national governments, the OECD, UNESCO, etc., proclaiming the importance of higher education (HE) to/for economic development. While HE has long yearned for the time in which it would head the policy agenda, and be rewarded with vast sums of public investment, it may not have realised that increased funding would be accompanied by calls for greater accountability and scrutiny, pressure for value-for-money, and organisational and governance reform. Many critics cite these developments as changing the fundamentals of higher education. Has higher education become the victim of its own propaganda?

At a recent conference in Brussels a representative from the EU reflected on this paradox. The Lisbon Strategy identified a future in which Europe would be a/the leader of the global knowledge economy. But when the statistics were reviewed, there was a wide gap between vision and reality. The Shanghai Academic Ranking of World Universities, which has become the gold standard of worldwide HE rankings, has identified too few European universities among the top 100. This was, he said, a serious problem and blow to the European strategy. Change is required, urgently.

University rankings are, whether we like it or not, beginning to influence the behaviour of higher education institutions and higher education policy because they arguably provide a snap-shot of competition within the global knowledge industrial sector (see E. Hazelkorn, Higher Education Management and Policy, 19:2, and forthcoming Higher Education Policy, 2008). Denmark and France have introduced new legislation to encourage mergers or the formation of ‘pôles’ to enhance critical mass and visibility, while Germany and the UK are using national research rankings or teaching/learning evaluations as a ‘market’ mechanism to effect change. Others, like Germany, Denmark and Ireland, are enforcing changes in institutional governance, replacing elected rectors with corporate CEO-type leadership. Performance funding is a feature everywhere. Even the European Research Council’s method of ‘empowering’ (funding) the researcher rather than the institution is likely to fuel institutional competition.

In response, universities and other HEIs are having to look more strategically at the way they conduct their business and organise their affairs, and at the quality of their various ‘products’, e.g., educational programming and research. In return for increased autonomy, governments want more accountability; in return for more funding, governments want more income-generation; in return for greater support for research, governments want to identify ‘winners’; and in return for valuing HE’s contribution to society, governments want measurable outputs (see, for example, this call for an “ombudsman” for higher education in Ireland).

European governments are moving from an egalitarian approach – where all institutions are broadly equal in status and quality – to one in which excellence is promoted through elite institutions, differentiation is encouraged through competitive funding, public accountability is driven by performance measurements or institutional contracts, and student fees are a reflection of consumer buoyancy.

But neither the financial costs nor the implications of this strategy – for both governments and institutions – have been thought through. The German government has invested €1.9b over five years in the Excellence Initiative, but this sum pales into insignificance compared with claims that a single ‘world class’ university is a $1b – $1.5b annual operation, plus $500m with a medical school, or with other national investment strategies, e.g., China’s $20b ‘211 Project’ or Korea’s $1.2b ‘Brain Korea 21’ programme, or with the fund-raising capabilities of US universities (‘Updates on Billion-Dollar Campaigns at 31 Universities’; ‘Foundations, endowments and higher education: Europe ruminates while the USA stratifies‘).

Given public and policy disdain for increased taxation, if European governments wish to compete in this environment, which policy objectives will be sacrificed? Is the rush to establish ‘world-class’ European universities hiding a growing gap between private and public, research and teaching, elite and mass education? Evidence from Ireland suggests that despite efforts to retain a ‘binary’ system, students are fleeing from less endowed, less prestigious institutes of technology in favour of ‘universities’. At one stage, the UK government promoted the idea of concentrating research activity in a few select institutions/centres until critics, notably the Lambert report and more recently the OECD, argued that regionality does matter.

Europeans are keen to establish a ‘world class’ HE system which can compete with the best US universities. But it is clear that such efforts are being undertaken without a full understanding of the implications, intended and unintended.

Ellen Hazelkorn

OECD ministers meet in January to discuss possible evaluation of “outcomes” of higher education

Further to our last entry on this issue, and a 15 November 2007 story in The Economist, here is an official OECD summary of the Informal OECD Ministerial Meeting on evaluating the outcomes of Higher Education, Tokyo, 11-12 January 2008.  The meetings relate to the perception, in the OECD and its member governments, of an “increasingly significant role of higher education as a driver of economic growth and the pressing need for better ways to value and develop higher education and to respond to the needs of the knowledge society”.

Global university rankings 2007: interview with Simon Marginson

Editor’s note: The world is awash in discussion and debate about university (and disciplinary) ranking schemes, and what to do about them (e.g.  see our recent entry on this). Malaysia, for example, is grappling with a series of issues related to the outcome of the recent global rankings schemes, partly spurred on by ongoing developments, but also a new drive to create a differentiated higher education system (including so-called “Apex” universities). In this context Dr. Sarjit Kaur, Associate Research Fellow, IPPTN, Universiti Sains Malaysia, conducted an interview with Simon Marginson, Australian Professorial Fellow and Professor of Higher Education, Centre for the Study of Higher Education, The University of Melbourne. The interview was conducted on 22 November 2007.
~~~~~~~~~~~~~~~~~~~~~~~

Q: What is your overall first impression of the 2007 university rankings?

A: The Shanghai Jiao Tong (SHJT) rankings came out first and the ranking is largely valid. The outcome shows a domination by the large, size-advantaged universities of the Western world, principally English-speaking countries and principally the US. There are no surprises in that when you look at the fact that the US spends seven times as much on higher education as the next nation, which is Japan, and that seven times as much is a very big advantage in a competitive sense. The Times Higher Education Supplement (THES) rankings are not valid, in my view – I mean, you have a survey which gets a 1% return, is biased to certain countries and so on. The outcome tends to show that similar kinds of universities do well in the top 50, anyway, as in the SHJT, because research-strong universities also have strong reputations and that shows up strongly in the THES. But the Times one is more plural: major universities in a number of countries (the oldest, largest, and best established) appear in the top 100 who aren’t strong enough in research terms to appear in the SHJT. But I don’t put any real value on the Times results – they go up and down very fast, with institutions that are in the top 100 then disappearing from the top 200 two years later, like Universiti Malaya did. It doesn’t mean too much.

Q: In both global university rankings, UK and US universities still dominate the top ten places. What’s your comment on this?

A: Well, it’s predictable that they would dominate in terms of a research measure because they have the largest concentration of research power – publications in English-language journals, which are mostly edited from these countries and publish their scholars in large numbers. The Times is partly driven by research (only 1/5 of it is) and partly driven by the number of international students institutions have – they tend to go to the UK and Australia more than they go to the US, but they tend to be in English-speaking countries as well. In the Times, one half (50%) is determined by reputation, as there are reputational surveys, of which one is worth 40% and the other 10%. Now, reputation tends to follow established prestige and the English language, where the universities have the prestige as well. But the other factor is that the reputational surveys are biased in favour of countries which use the Times, read the Times and know the Times (usually in the former British Empire), so it tends to be the UK, Australia, New Zealand, Singapore, Malaysia and Hong Kong that put in a lot of survey returns, whereas the Europeans don’t put in many; and many other Asian countries don’t put in many. So, that’s another reason why the English universities would do well. In fact the English universities do very well in the Times rankings – much better than they should really, considering their research strengths.

Q: What’s your comment on how most Asian universities performed in this year’s rankings?

A: Look, I think the SHJT is the one to watch because that gives you realistic measures of performance. The problem with SHJT is it tends to be a bit delayed – there’s a delay between the time you performed and the time it shows up in the rankings, because the citation and publication measures are operating off the second half of the 90s in the Thomson highly cited (HiCi) researcher count used by SHJT. So when the first half of the 2000s starts to show up, you’re going to see the National University of Singapore go up from the top 200 into the top 100 pretty fast. You would expect the Chinese universities to follow as well, a bit slower, so that Tsinghua and Peking Uni, Fudan, and Jiao Tong itself will move towards the top 200 and top 100 over time because they are really building up their strengths. That would be a useful trend line to follow. Korean universities are also going to improve markedly in the rankings over time, with Seoul National leading the way. Japan’s already a major presence in the rankings of course. I wouldn’t expect any other Asian country, at this point, to start to show up strongly. There’s no reason why the Malaysian universities should suddenly move up the research ranking table when they are not investing any more in research than they were before. It will be a long time before Malaysia starts creating an impact in the SHJT, even if, as in China, policy tomorrow required universities to build on their basic research strengths – which would involve sending selected people off abroad all the time for PhDs, establishing enough strength in USM, UKM and UM and a couple more to serve as major research bases at home, with the capacity to train people at PhD level at home and so on, and performing a lot of basic research. To do that you have to pay competitive salaries; you’ve got to (like Singapore does) bring people back who might otherwise want to work in the US or UK…and that means paying something like UK salaries or, if not, American ones. Then you’ll settle them down, and it’ll take them 5 years before they do their best output. Malaysia is perhaps better at marketing than it is at research performance, because it has an international education sector and because the government is quite active in promoting the university sector offshore – and that’s good, and that’s how it should be.

Q: What about the performance of Australian universities?

A: They performed as they should in the SHJT, which is to say we got 2 in the top 100. That’s not very good when you look at Canada, which is a country that is only slightly wealthier and about 2% bigger, with a similar kind of culture and quality, and it does much better – I mean it has 2 in the top 40 because it spends a lot more on research. Australia would do better in the SHJT if more than just ANU was being funded specially for research. Sydney, Queensland and Western Australia were in the top 150, which is not a bad result, and New South Wales is in the top 200, while Adelaide and Monash were in the top 300, as is Macquarie I think. So it’s 9 in the top 300, which is reasonably good, but there’s none in the top 50, which is not good. Australia is not there yet in being regarded as a serious research power. In the THES rankings, Australian universities did extremely well because the survey vastly favours those countries which use the Times, know the Times and tend to return the surveys in higher than average numbers – and Australia is one of those – and because Australia’s international education sector is heavily promoted and Australia has a lot of international students, which pushes its position up in the internationalisation indicator. So Australia comes out scoring well in the THES rankings, having 11 universities in the top 100, and that’s just absurd when you look at the actual strengths of Australian universities and even their reputation worldwide; they’re not strong overall, in the same sense, as research-based institutions. I’d say the same for British universities too – I mean they did too well. University College London (UCL) this year is 9th in the ranking while stellar institutions like Stanford and University of California, Berkeley were 19th and 22nd – this doesn’t make any sense and it’s a ludicrous result.

Q: It is widely acknowledged that in the higher education sector the keys to global competition are research performance and reputation. Do you think the rankings capture these aspects competently?

A: Well, I think the SHJT is not bad with research performance. There are a lot of ways you can do this, and I think using the Nobel Prize is not really a good indicator because, while the people who receive the prize in the sciences and economics are usually good people, some would say people who are just as good never receive a prize – you know, because it’s submission-based and it’s not all that open; it’s arguable as to whether it’s pure merit. I mean, anyone who gets a prize has merit but it doesn’t mean it’s the highest merit of anyone possible that year. Given that the Nobel counts towards 30% of the total, I think it’s probably a little exaggerated in its impact. So I’d take that out and I’d use something like the citation per head measure, which also appears in the THES rankings – actually using similar data – but which can be done with the SHJT database as well. But there are a lot of problems – one of the issues is the fact that some disciplines, for example, cite more than others. Medicine cites much more heavily than engineering, so a university strong in medicine tends to look rather good in the Jiao Tong indicators compared to universities strong in engineering; many of the Chinese universities, and universities in Singapore and Australia too, are particularly strong in engineering, so that doesn’t help them. But once you start to manipulate the data, you’re on a bit of a slippery slope downwards because there are many other ways you can do it. I think the best measures are probably those developed by Leiden University for citations, where they control for the size of the university and they control for the disciplines. They don’t take it any further than that and they are very careful and transparent when they do that. So that’s probably the best single set of research outcome measures, but there are always arguments both ways when you’re trying to create a level playing field and recognise true merit. The Times doesn’t measure reputation well when you have a survey with a 1% return rate which is biased towards 4 or 5 countries and under-represents most of the others. That’s not a good way to measure reputation, so we don’t know reputation from the point of view of the world; the THES are basically UK university rankings.
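The field normalisation Marginson attributes to the Leiden group is easy to illustrate. Below is a minimal sketch with entirely invented papers, staff numbers and field baselines: each paper’s citations are divided by an assumed world average for its field before averaging, so medicine’s heavier citation culture no longer drowns out engineering.

```python
# Sketch of field-normalised citation scoring (Leiden-style, as described
# above). All data, including the field baselines, are invented assumptions.
FIELD_MEAN = {"medicine": 12.0, "engineering": 4.0}  # assumed world averages

papers = [  # (field, citations) for one hypothetical university
    ("medicine", 24), ("medicine", 6), ("engineering", 8), ("engineering", 2),
]
staff_fte = 2.0  # assumed full-time-equivalent academic staff

normalised = [cites / FIELD_MEAN[field] for field, cites in papers]
mean_impact = sum(normalised) / len(normalised)  # field-normalised average
per_head = sum(normalised) / staff_fte           # size-controlled variant

print(f"field-normalised impact: {mean_impact:.2f}, per staff: {per_head:.2f}")
# The engineering paper with 8 citations scores 2.0 -- the same as a medicine
# paper with 24 -- whereas a raw count would rank it three times lower.
```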

Q: What kinds of methodological criticisms would you have against the SHJT in comparison to the THES?

A: I don’t think there’s anything that the THES does better, except that the THES uses the citation per head measure, which is probably a good idea. The SHJT uses a per head measure of research performance as a whole, which is probably a less valuable way to take size into account – I think the way Leiden does it is better than either as a size measure. That’s the only thing the THES does better, and everything else the THES does a good deal worse, so I wouldn’t want to imitate the THES in any circumstances. The other problem with the Times is the composite indicator – how do you equate the student-staff ratio, which is meant to measure teaching capacity, with everything else? How can you give that 20%, give research 20%, and reputation the rest? What does that mean? Why? Why not give teaching 50%, why not give research 50%? I mean, it’s so arbitrary. There’s no theory at the base of this. It’s just people sitting in a market research company and the Times office, guessing about how best to manipulate the sector. Social science should be very critical of this kind of thing, regardless of how well or how badly the university is doing.
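Marginson’s point about arbitrariness is easy to demonstrate. The sketch below uses invented institutions, scores and weights; the first weighting scheme loosely echoes a reputation-heavy THES-style split, the second a research-heavy alternative, and nothing in it is real data.

```python
# Sketch of how composite-indicator weights, not the underlying scores,
# can decide a league table. Institutions, scores and weights are invented.
scores = {
    "Univ A": {"reputation": 95, "research": 60, "staff_ratio": 70},
    "Univ B": {"reputation": 65, "research": 90, "staff_ratio": 85},
}

def composite(weights):
    """Weighted sum of each institution's indicator scores."""
    return {u: sum(weights[k] * v for k, v in inds.items())
            for u, inds in scores.items()}

schemes = {
    "reputation-heavy": {"reputation": 0.5, "research": 0.2, "staff_ratio": 0.3},
    "research-heavy":   {"reputation": 0.2, "research": 0.5, "staff_ratio": 0.3},
}

for name, weights in schemes.items():
    totals = composite(weights)
    ranking = sorted(totals, key=totals.get, reverse=True)
    print(f"{name}: {ranking} {totals}")
# Univ A tops the first table and Univ B the second, though neither
# university's performance changed -- only the arbitrary weights did.
```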

Q: In your opinion, have these global university rankings gained the trust or the confidence of mainstream public and policy credibility?

A: They’ll always get publicity if they come from apparently authoritative sources and appear to cover the world. So it’s possible, as with the Times, to develop a bad ranking and get a lot of credibility. But the Times has now lost a good deal of ground, and you can see why it’s losing credibility – first in informed circles like social science, then with the policy makers, then with the public and the media. Its results are so volatile, and universities get treated so harshly by going up and down so fast when their performance is not changing, that everyone is now beginning to realize there is no real relationship between the merit of a university and the outcome of the ranking. And once that happens, the ranking has no ground – it’s gone, it’s finished; and that’s what’s happening to the Times. I mean it will keep coming out for a bit longer but it might stop altogether because its credibility is really declining now.

Q: To what extent do university rankings help intensify global competition for HiCi researchers, international doctoral students, or the best postgraduate students?

A: I think the Jiao Tong has had a big impact in focusing attention, in a number of countries, on getting universities into the top 100 or even the top 500 for that matter (and in some countries the top 50 or top 20), and that is leading in some nations – you could name China and Germany, for example – to a concentration of research investment to try to boost the position of individual universities and even disciplines, because Jiao Tong also measures performance in 5 broad discipline areas, as does the Times. I think that kind of policy effect will continue, and certainly having one world ranking which is credible, such as the Jiao Tong, will help intensify global competition and lead everyone to see the world in terms of a single competition in higher education, particularly in research performance. That focuses attention on the high-quality researchers who produce most of the research output – I mean, studies show that 2-5% of researchers in most countries produce more than half of the outcomes in terms of publications and grants. Having this is helpful and it’s a good circumstance.

Q: Do you have any further comments on the issue of whether university rankings are on the right track? What’s your prediction for the future?

A: I think bad rankings tend to undermine themselves over time because their results are not credible. Good ranking systems are open to refinement and improvement and they tend to get stronger, and that’s exactly the case with the Jiao Tong. I think the next frontier with the rankings is the measurement of teaching performance and student quality – the value added by the point of exit – whether it’s done as a value-added evaluation or just as a once-off measure. The OECD is in the early stages of developing internationally comparable indicators of student competence – it might just use competency tests, like problem-solving skills, or it may use discipline-based tests in areas like physics which are common to many countries. It’s more difficult to use disciplines, but on the other hand if you just use skills without knowledge, it’s also limited and perhaps open to question. The OECD has many steps and problems ahead in trying to do this, and there are questions as to how it can be done – whether it’s within the frame of the institution or through national systems. There are many other questions about this, and the technical problems are considerable just to get cross-country measures which are comparable, but this may well happen. When you have ranking capacity on the basis of student outcomes, it probably becomes more powerful than research performance in some ways, at least in terms of the international market. I mean, research performance probably distinguishes universities from other institutions and gives them prestige, but teaching outcomes are also important. Once you can establish comparability across countries and measure teaching outcomes that way, then it could be a new world.

End

Reactions to the ranking of universities: is Malaysia over-reacting?

I have had a chance to undertake a quick survey of colleagues in other countries regarding reactions to the UK’s Times Higher World University Rankings 2007 in their respective countries.

A colleague in the UK noted that, as one might expect from the home of one of the more notorious world rankings, and a higher education system obsessed with reputation, ‘league tables’ are much discussed in the UK. The UK government, specifically the Higher Education Funding Council for England (HEFCE), as noted last week, has commissioned major research into five ranking systems and their impact on higher education institutions in England. In other words, the UK government is very concerned with the whole business of the ranking of universities, for the reputation of the UK as a global centre for higher education is at stake.

Another colleague reported that, among academics in the UK, the reaction to the Times Higher rankings varies widely. Many people working in higher education are deeply sceptical and cynical about such league tables – about their value, purpose and especially methodology. For the majority of UK universities that do not appear in the tables and are probably never likely to appear, the tables are of very little significance. However, for the main research-led universities they are a source of growing interest. These are the universities that see themselves as competing on the world stage. Whilst they will often criticise the methodologies in detail, they will still study the results very carefully and will certainly use good results for publicity and marketing. Several leading UK universities (e.g., Warwick) now have explicit targets, for example, to be in the top 25 or 50 by a particular year, and are developing strategies with this in mind. However, it is reported that most UK students pay little attention to the international tables, though universities are aware that rankings can have a significant impact on recruitment of international students.

In Hong Kong, the Times Higher rankings have been seriously discussed both in the media and by university presidents (some of whom received higher rankings this year, thus making it easier to request increased funding from government based on their success). Among scholars/academics, especially those familiar with the various university ranking systems (the Times Higher rankings and others, like the Shanghai Jiaotong University rankings), there is some scepticism, especially concerning the criteria used.

Rankings are a continuous source of debate in the Australian system, no doubt as a result of Australia’s strong focus on the international market. Both the Times Higher rankings and the recent one undertaken by the Melbourne Institute have resulted in quite strong debate, spurred by Vice Chancellors whose institutions do not score at the top.

In Brazil, it is reported that the ranking of universities did not attract media attention and public debate for the very reason that university rankings have had no impact on the budgetary decisions of the government. The more relevant issue on the higher education agenda in Brazil is social inclusion; thus public universities are rewarded for their plans for extending access to their undergraduate programs, especially if these include large numbers of students per faculty. Being able to attract foreign students is secondary in nature for many universities. Thus, public universities have had, and continue to have, assured access to budget streams that reflect the Government’s historical level of commitment.

A colleague in France noted that the manner in which Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to the Times Higher rankings is relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia had better watch out, as the world is indeed watching us.

Morshidi Sirat

University rankings: deliberations and future directions

I attended a conference (the Worldwide Universities Network-organised Realizing the Global University, with a small pre-event workshop) and an Academic Cooperation Association-organised workshop (Partners and Competitors: Analysing the EU-US Higher Education Relationship) last week. Both events were well run and fascinating. I’ll be highlighting some key themes and debates that emerged in them throughout several entries in GlobalHigherEd over the next two weeks.


One theme that garnered a significant amount of attention in both places was the ranking of universities (e.g., see one table here from the recent Times Higher Education Supplement-QS ranking that was published a few weeks ago). In both London and Brussels stakeholders of all sorts spoke out, in mainly negative tones, about the impacts of ranking schemes. They highlighted all of the usual critiques that have emerged over the last several years; critiques that are captured in the:

Suffice it to say everyone is “troubled by” (“detests”, “rejects”, “can’t stand”, “loathes”, “abhors”, etc…) ranking schemes, but at the same time the schemes are used when seen fit – which usually means by relatively highly ranked institutions and systems to legitimize their standing in the global higher ed world, or (e.g., see the case of Malaysia) to flog the politicians and senior officials governing higher education systems and/or universities.

If ranking schemes are here to stay, as they seem to be (despite the Vice-Chancellor of the University of Bristol emphasizing in London that “we only have ourselves to blame”), four themes emerged as to where the global higher ed world might be heading with respect to rankings:

(1) Critique and reformulation. If ranking schemes are here to stay, as credit rating agencies’ (e.g., Standard & Poor’s) products also are, then the schemes need to be more effectively and forcefully critiqued with an eye to the reformulation of existing methodologies. The Higher Education Funding Council for England (HEFCE), for example, is conducting research on ranking schemes, with a large report due to be released in February 2008. This comes on the back of the Institute for Higher Education Policy’s large “multi-year project to examine the ways in which college and university ranking systems influence decision making at the institutional and government policy levels”, and a multi-phase study by the OECD on the impact of rankings in select countries (see Phase I results here). On a related note, the three-year old Faculty Scholarly Productivity Index is continually being developed in response to critiques, though I also know of many faculty and administrators who think it is beyond repair.

(2) Extending the power and focus of rankings. This week’s Economist notes that the OECD is developing a plan for a January 2008 meeting of member education ministers where they will seek approval to “[L]ook at the end result—how much knowledge is really being imparted”. What this means, in the words of Andreas Schleicher, the OECD’s head of education research, is that rather “than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”. The article notes that the first rankings should be out in 2010, and that:

[t]he task the OECD has set itself is formidable. In many subjects, such as literature and history, the syllabus varies hugely from one country, and even one campus, to another. But OECD researchers think that problem can be overcome by concentrating on the transferable skills that employers value, such as critical thinking and analysis, and testing subject knowledge only in fields like economics and engineering, with a big common core.

Moreover, says Mr Schleicher, it is a job worth doing. Today’s rankings, he believes, do not help governments assess whether they get a return on the money they give universities to teach their undergraduates. Students overlook second-rank institutions in favour of big names, even though the less grand may be better at teaching. Worst of all, ranking by reputation allows famous places to coast along, while making life hard for feisty upstarts. “We will not be reflecting a university’s history,” says Mr Schleicher, “but asking: what is a global employer looking for?” A fair question, even if not every single student’s destiny is to work for a multinational firm.

Leaving aside the complexities and politics of this initiative, the OECD is, yet again, setting the agenda for the global higher ed world.

(3) Blissful ignorance. The WUN event had a variety of speakers from the private for-profit world, including Jorge Klor de Alva, Senior Vice-President, Academic Excellence and Director of University of Phoenix National Resource Center. The University of Phoenix, for those of you who don’t know, is part of the Apollo Group, has over 250,000 students, and is highly profitable with global ambitions. I attended a variety of sessions where people like Klor de Alva spoke, and they couldn’t care less about ranking schemes, for their target “market” is a “non-traditional” one that tends not to matter (to date) to the rankers. Revenue, operating margin and income, and net income (e.g., see Apollo’s financials from their 2006 Annual Report), and the views of Wall Street analysts (but not the esteem of the intelligentsia), are what matter instead for these types of players.

(4) Performance indicators for “ordinary” universities. Several speakers and commentators suggested that the existing ranking schemes were frustrating to observe from the perspective of universities not ‘on the map’, especially if they would realistically never get on the map. Alternative schemes were discussed, including performance indicators that reflected the capacity of universities to meet local and regionally-specific needs; needs that are often ignored by highly ranked universities or the institutions developing the ranking methodologies. Thus a university could feel better or worse depending on how it does over time in the performance rankings. This perspective is akin to that put forward by Jennifer Robinson in her brilliant critique of the global cities ranking schemes that exist. Robinson’s book, Ordinary Cities: Between Modernity and Development (Routledge, 2006) is well worth reading if this is your take on ranking schemes.

The future directions that ranking schemes will take are uncertain, though what is certain is that when the OECD and major funding councils start to get involved, the politics of university rankings cannot help but heat up even more. This said, presence and voice regarding the (re)shaping of schemes with distributional impacts always need to be viewed in conjunction with attention to absence and silence.

Kris Olds