University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the areas of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. She has served as a member of the ESRC Grants Board (1999-2003) and as a panel member for the Education Research Assessment Exercise in 1996, 2001 and 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills  (DIUS) over 2008.

~~~~~~~~~~~~~~

The institutional performance of universities and their academic staff and students is a very topical issue in many countries, for potential students and their families and sponsors, for governments and for businesses. Beyond numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear at the top of these rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for Innovation, Universities and Skills (DIUS), commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were: On-Line Higher Education Learning; Intellectual Property and research benefits; the Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engage in knowledge transfer activity; high quality teaching; improving workforce skills; and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, and also surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out – higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), and the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume but not impact.
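The proxy problem is easy to demonstrate. Here is a minimal sketch (entirely hypothetical figures, not drawn from the report) of how a volume-based measure and a per-paper citation proxy can order the same departments quite differently:

```python
# Illustrative sketch (hypothetical data): a volume measure and a
# per-paper "impact" proxy rank the same departments differently.
departments = {
    # name: (papers, total_citations)
    "Dept A": (200, 1400),   # high volume, modest impact
    "Dept B": (60, 900),     # low volume, high impact
    "Dept C": (120, 1080),
}

by_volume = sorted(departments, key=lambda d: departments[d][1], reverse=True)
by_impact = sorted(departments,
                   key=lambda d: departments[d][1] / departments[d][0],
                   reverse=True)

print("Ranked by total citations (volume):", by_volume)
print("Ranked by citations per paper (quality proxy):", by_impact)
# The two orderings disagree – exactly the kind of unintended consequence
# the report warns about when proxies stand in for quality.
```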

However, what is not emphasized enough is that the consequences, once a performance measure is made public, are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus in the RAE 2008, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, and being helped by HEFCE to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, which HEFCE feels it is not yet ready to pronounce on. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added for recent graduates, not what happens to them over the life course as a whole.
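For readers unfamiliar with the mechanics, value-added measures are usually built as ‘actual minus expected’: a sector-wide model predicts outcomes from entry qualifications, and an institution’s value added is the gap between its cohort’s actual outcomes and those predictions. A minimal sketch with invented figures (real schemes use far richer models and covariates):

```python
# "Actual minus expected" value-added, in miniature (hypothetical data).
import numpy as np

# Sector-wide relationship between entry qualifications and outcomes.
sector_entry   = np.array([50, 55, 60, 65, 70, 75, 80, 85, 90], dtype=float)
sector_outcome = np.array([52, 58, 61, 67, 70, 77, 81, 84, 91], dtype=float)
a, b = np.polyfit(sector_entry, sector_outcome, deg=1)  # outcome = a*entry + b

# One institution's cohort.
inst_entry   = np.array([60, 70, 80], dtype=float)
inst_outcome = np.array([68, 76, 84], dtype=float)

value_added = inst_outcome - (a * inst_entry + b)  # positive = above expectation
print("Mean value added:", round(value_added.mean(), 2))
# Note the period problem raised above: measuring at graduation says nothing
# about value that emerges later in graduates' working lives.
```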

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors – importantly, by articulating the values and visions of staff and students – and possibly very little by ‘performance’ measures; they often appeal to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed, which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.
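For what it is worth, such a spidergram is trivial to produce once an indicator set is agreed. A sketch using matplotlib, with indicator names borrowed from Denham’s five policy areas and invented scores (no such agreed set yet exists):

```python
# Illustrative spidergram (radar chart) of institutional performance.
# Indicator names echo the five policy areas; the scores are invented.
import numpy as np
import matplotlib.pyplot as plt

indicators = ["Research", "Teaching", "Knowledge transfer",
              "Workforce skills", "Widening participation"]
scores = [0.8, 0.6, 0.5, 0.4, 0.7]   # normalised 0-1, hypothetical

angles = np.linspace(0, 2 * np.pi, len(indicators), endpoint=False).tolist()
angles += angles[:1]                  # close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(indicators)
ax.set_ylim(0, 1)
plt.title("Institutional performance spidergram (illustrative)")
plt.show()
```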

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective: it fails to grasp sufficiently how some of HEFCE’s own measures and activities are distorted by the media, it does not really engage with the kinds of new technologies that students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc.), and it focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relation to global ranking schemes, and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, rankings watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, which have been issued by the European Commission’s Directorate-General for Education and Culture, are particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is derived from the Lifelong Learning policy and programme stream of the Commission.

Thus we see a shift, in Europe, towards the implementation of an alternative to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

International university rankings, classifications & mappings – a view from the European University Association

Source: European University Association Newsletter, No. 20, 5 December 2008.

Note: also see ‘Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon’.

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ from rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to selective parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) results come out in late 2008, we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or to a relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).
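The structural dependence is easy to see in stylised form. RAE 2008 gives each submission a quality profile (the shares of its activity judged 4*, 3*, 2*, 1* or unclassified), and the funding councils have allocated research money on quality-weighted volume. A minimal sketch, with illustrative weights rather than HEFCE’s actual settings:

```python
# Stylised quality-weighted volume, the quantity that drives funding shares.
# Weights are illustrative only; the real settings differ and change over time.
WEIGHTS = {"4*": 7, "3*": 3, "2*": 1, "1*": 0, "u": 0}

def weighted_volume(profile, staff_fte):
    """Staff FTE submitted, multiplied by the quality-weighted profile."""
    return staff_fte * sum(WEIGHTS[band] * share for band, share in profile.items())

# Two hypothetical submissions of equal size in one unit of assessment.
dept_x = weighted_volume({"4*": 0.25, "3*": 0.40, "2*": 0.30, "1*": 0.05, "u": 0.00}, 30)
dept_y = weighted_volume({"4*": 0.10, "3*": 0.35, "2*": 0.45, "1*": 0.10, "u": 0.00}, 30)

print(dept_x, dept_y)  # 97.5 vs 66.0: a modest profile gap becomes a large funding gap
```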

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA, given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined.  As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes’) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high-powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, which is published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example, Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field, the preparation of a new university ranking system is being considered by the French ministry of education.

The question addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]
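The flavour of such a robustness analysis is easy to convey in miniature. The toy sketch below (invented scores, and far simpler than the JRC’s actual uncertainty and sensitivity analysis) jitters the weights of a two-indicator composite many times and watches how much each university’s rank moves; positions at the very top stay put while mid-table positions churn:

```python
# Toy rank-robustness test: perturb composite-index weights and record
# how far each university's rank can move. All scores are invented.
import random

scores = {  # university: (research score, teaching score)
    "U1": (95, 90), "U2": (70, 60), "U3": (68, 64), "U4": (66, 61), "U5": (40, 45),
}

def rank_with(w_research):
    w_teaching = 1 - w_research
    composite = {u: w_research * r + w_teaching * t for u, (r, t) in scores.items()}
    ordered = sorted(composite, key=composite.get, reverse=True)
    return {u: i + 1 for i, u in enumerate(ordered)}

random.seed(0)
positions = {u: [] for u in scores}
for _ in range(1000):
    ranks = rank_with(random.uniform(0.3, 0.9))  # jitter the weighting
    for u, r in ranks.items():
        positions[u].append(r)

for u, rs in positions.items():
    print(u, "rank range:", min(rs), "-", max(rs))
# U1 and U5 never move; the closely matched mid-table universities swap places.
```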

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities the assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.
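What such ‘aggregate benchmarking’ might look like in practice can be sketched crudely. The tier weights below are an invention for illustration only – ERIH itself assigns journals to categories (A, B and C) without attaching numeric scores:

```python
# Crude national benchmark from journal-tier counts (invented data).
# The numeric weights are an assumption for illustration, not ERIH policy.
from collections import defaultdict

TIER_WEIGHTS = {"A": 3, "B": 2, "C": 1}

publications = [  # (country, journal tier)
    ("FR", "A"), ("FR", "B"), ("FR", "B"),
    ("DE", "A"), ("DE", "A"), ("DE", "C"),
    ("UK", "B"), ("UK", "C"),
]

totals = defaultdict(int)
for country, tier in publications:
    totals[country] += TIER_WEIGHTS[tier]

print(dict(totals))  # {'FR': 7, 'DE': 7, 'UK': 3} – a crude national benchmark
```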

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will enable national (at the intra-European scale) peaks (and presumably valleys) of quality output to be mapped for the humanities as a whole, but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems;
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

Investing wisely for Australia’s future

Editor’s note: The following speech was given by Professor Ian Chubb, Vice-Chancellor of The Australian National University (ANU) on Wednesday 29 October 2008 at the National Press Club of Australia. It is reprinted in GlobalHigherEd with his kind permission.

~~~~~~~~~~~~~

Thank you Ken – for your welcome and introduction.

It has been some years since I last spoke at the National Press Club, and I appreciate the opportunity to do so again – particularly at the time when education reviews and research enquiries are being finalised and Government responses being prepared.

It is an important time; and there are opportunities not to be missed – and I plan to raise some of those with you today.

I suppose, before I start, I should make three things clear:

  1. I support the push for better funding for universities – accompanied by both reform and selectively allocated additional funds for particular purposes, based largely on the quality of the work we do – where we do it;
  2. I support the directions being pursued by the Government – and look forward to the outcomes of their various deliberations; and
  3. I remind you that I am from ANU, that I work for ANU and that I serve ANU.  I like to think that through that role, however, I can also serve a bigger one – particular and important aspects of the national interest.

We at ANU do make a contribution to Australia and beyond.  For a start, we educate excellent students very well indeed; we rate in the top ‘A1’ band of the Federal Minister’s Teaching and Learning Performance Fund across our teaching profile – and have done so in the two years the category has been identified.   This was a surprise to some in the higher education sector, where we cherish the notion of a teaching-research nexus.  Notwithstanding the mantra, some had anticipated that better teaching would be done in the places where there was less research – more to spend on it perhaps, more of a focus, and so on.  It was presumed that the research-intensive universities would probably see teaching as a chore. But the research-intensive universities are places where staff and students alike are learners and where all of them are just downright curious to discover what they don’t know.  And at its best, they do it together.

At ANU we continue to work to improve what we do and how we do it. We set ourselves and our students stretch targets. We aim for high standards – not throughput. And we aim to give our students a head start in their life after University.

In research we do well.  We review what we do, we rate what we do, and we manage what we do in a way that some would call relentless, though few could argue is stifling.  So I am proud that the ANU continues to lead among Australian universities in the various world rankings that are, necessarily, based largely on research performance.

We are placed 59th in the Shanghai Jiao Tong’s most recent listings and 16th on the list produced by the UK Times Higher Education Supplement, a position we have maintained now over three consecutive years.

I am proud because the rankings reflect well on the efforts of talented people. It is useful and reinforcing for that reason and, possibly more usefully, it tells you about your neighbourhood when the world’s universities are rated along with you using a particular set of indicators.

I am not at all boastful, however, because we all know that such rankings are constructed arbitrarily around some of the available comparable measures year on year.  That they are called ‘prestigious’ or ‘highly regarded’ is largely because they are published each year, and because we have nothing else.  They are represented as ‘qualitative’ when in fact they are only partly about quality.

This is one reason why I support the Federal Government’s Excellence in Research for Australia (ERA) proposal, because, handled well, it will provide us with an efficacious and internationally benchmarked model to judge research quality in Australia.  Then we should have something truly useful to talk about.

But let me now talk about something usefully indicative that can be drawn from the Shanghai Jiao Tong (SJT) world university rankings: the neighbourhood.

When the rankings first came out five years ago, there were seven Chinese universities in the top 500.  This year there are eighteen.  It is quite possible that in five years’ time, given both the rate and the selectivity of their additional investment, there will be 10 or so Chinese universities in the top 200 and several in the top 100.  Australia may well struggle to keep our current three (ANU, Melbourne and Sydney) in that league.  Rankings are about relative performance and positions can change because of bigger changes in the performance of others and not because your own has slipped – or it could even be the way in which institutions present their data rather than any actual change in performance.  But the outcomes send signals that can be read.

Does this all matter?  Well I think it does, but it is not the only thing that matters.

When you look at the countries that have more than one university rated in the top 100 on the SJT ranking in 2007 you can see that they are countries with a high GDP per capita.

The United States stands out because of its scale.  The United Kingdom holds par when adjusted for population size.  Australia and Canada have been lifting above their weight, but Canada is now waxing while Australia is waning in every disciplinary field.  Asian universities are rising. European universities now realise that they are being left behind – but have started asking the right questions.

But history tells us that if you’re not able to pull your weight and to be a contributor you risk being locked out as a discipline, or as a university, or as a nation. If we don’t match investment and don’t match performance, Australia could be back to somewhere near where we were before 1946 – on the outside looking in.

In a world thirsty for talent and the benefits derived from the application of that talent, strategies are changing.  Many countries are ramping up their investments in their leading universities.  They are selectively concentrating additional funding for research, for research infrastructure, research centres of excellence and international research collaborations.  They are increasing the number of professors, and developing career opportunities for post-doctoral staff while enlarging doctoral student enrolments, including international PhD enrolments.

Take the example of Canada, which has set itself the goal of ranking amongst the top 4 countries in the world in terms of R&D performance across the government, business and higher education sectors. It set a target in 2002 of increasing the admission of Masters and PhD students at Canadian universities by 5% per year through to 2010. It is providing $300 million annually to support the salary costs of 2000 research professors in Canadian universities and hospitals, seeking, in their own words, ‘to unleash the full research potential’ of these institutions by attracting the best talent in the world – and they are doing so selectively. Close to two thirds of the Chairs have been allocated to the 13 most research-intensive universities amongst their nearly 70 universities and a roughly equal number of colleges. Just last week the Canadian Finance Minister commented that they must build on ‘our knowledge advantage’ and that: “This is a critical time in Canada in terms of making sure that in our public-policy decisions we support universities and colleges.”

Germany.  Germany has invested heavily in research and innovation, particularly in the university sector, aiming, in their own words, “to establish internationally visible research beacons in Germany.” Their strategy includes spending up to 195 million euros each year to establish internationally visible research and training Clusters of Excellence, based at universities but collaborating with non-university institutions. Closely tied to this is an effort, to the tune of 210 million euros each year, to heighten the profile and strength of ten selected universities to be amongst the world’s best in their areas of demonstrable excellence.

China.  China is the world’s fastest-growing supporter of research and development, with its national R&D funding now third highest in the world, just behind the United States and Japan. In 1998 China instituted the 985 Project, under which its ten leading universities were given special grants in excess of US$124 million over three years with the explicit aim of ensuring that China had universities represented amongst the world’s best. They have now enlarged the program to cover 36 universities amongst their hundreds.

Australia has still to move – and we have choices to make.  We see what is happening elsewhere: we see additional funding mostly concentrated and selectively allocated – not overtly, at least, at the expense of core funding; we see the benefits of granting universities adequate, strategic but accountable funding (like we once had); we see overt attempts to ‘internationalise’ by drawing staff and students from the global talent pool.  There is more…and so there are many lessons to be absorbed by us – an important one is to resist the temptation to spread additional resources thinly; that would be close to a uniquely Australian approach.

And this in a world that won’t care unless we earn the right to be at their table; it is as true for our university leaders, our staff and our students as it is for our political or our business leaders.  As I said earlier – if we are not at the table we will be back to something like the position we were in just after the Second World War.

Our approach must be different from what it is now.  We do need reform and we don’t need more tinkering. We don’t need more money tied up in small, so-called competitive programs that only partially fund what they purport to support and are not conducive to long-term planning.

I support a policy that will help universities be different from each other and to be outstanding at what they do.  I support policy-driven differentiation, not drift, and I support additional funding allocations above a better base related to both purpose and quality of work multiplied by the quantity of work.

I do not think that there is only one right way forward. And I would be happy to see us move on from an outdated ‘one-size fits all’ approach with side deals.  But not if it were replaced by the same sort of blunt and clumsy instrument.

But while there might not be one right way, I do know the wrong way: continuing the chronic partial funding of nearly everything we do.  In fact, we presently cope with a lot that is chronic. Chronic under-investment. Chronic tinkering rather than real reform. Chronic suspicion rather than trust. Chronic erosion of capital and infrastructure rather than provision of the best possible resources to enable our most talented to do their best work here. Chronic lack of support for students, who are seen as a cost rather than a means by which Australia invests in its future.  A chronic under-valuing of staff rather than recognising that the world’s call on talent means temptations are rife elsewhere. And a chronic under-valuing of PhD work and scholarships rather than using the peak to build capacity. The story rolls on.

For the universities of Australia to serve Australia’s interests, we need to be different from each other, respected for what we do, and be supported for what we do well over and above core support.  And we need the policy framework to make it happen, knowing that a consequence will be differential funding as different activities cost differently.

As a start we need to articulate what universities are for, what different purposes they may serve, and how.

In a recent essay, ‘What are universities for?‘, Boulton & Lucas (2008) suggest that the enduring role of universities is to create new opportunities and promote learning that enables deep thinking beyond the superficial, handles complexity and ambiguity, and shapes the future.  They argue that it is important not to be beguiled by prospects of immediate payoff from investment in university research and teaching.

I have a duty of care in my position to help build the University’s capacity to respond to such challenges in ways that are consistent with our envisaged role.  It is my responsibility, primarily, to ensure that the people with scholarly expertise in the University have the room and resources to excel in their research, the opportunity through teaching to share their first-hand insights with their students (I note that Paul Krugman, the most recent winner of the Nobel Prize for his research in Economics, said when he began his thanks: you have to start with your teachers), and the freedom to speak out when they see the need to do so, and to put their expertise into the public arena to help inform public opinion.

Let me indicate the ways by which universities can contribute, and then suggest some options for public policy.

I work from the premise that the ability of a university to deliver its mission depends crucially on public support, appropriate regulatory frameworks and adequate funding.  Without the requisite public trust and support universities cannot survive in today’s world.

Interestingly, the available evidence from surveys of community attitudes suggests that when it comes to major social, environmental and economic issues, the public and the Government look to universities for analysis, understanding and solutions.

Some of the current areas are well known: economic uncertainty, climate change, the threat of pandemics, sources of terrorism and the potential of alternative energy, just to name a few.

One of the ways ANU engages with the broader Australian community and seeks to understand what we Australians think is via ANUpoll.

The ANUpoll, led by Professor Ian McAllister in our College of Arts and Social Sciences, differs from other opinion polls by placing public opinion in a broad policy context, and by benchmarking Australian against international opinion. It can also reveal trends in opinions over many decades, drawing on the wide range of public opinion polls conducted at ANU since the 1960s.

It tells us interesting things about Australians. The first Poll, released in April this year, revealed that Australians, by international standards, are much more positively disposed to high levels of government expenditure, particularly on health, education, the environment and police. The Poll tells us that there is a greater level of trust in government in Australia relative to other nations.

The second poll, released in September, sought the views of Australians on higher education. It found that Australians are concerned about fair and equitable access to our universities; they view university as one important way of improving the job prospects of their children, but not the only avenue to success; and they believe that the decline in public funding for universities has gone too far.

And we know from the  ANUpoll released today that concern about climate change is a big issue for the community. Global warming is perceived as a major long-term threat to the health of the planet by a large proportion of the population.  But there is no simple solution to this problem. It is one that crosses the boundaries of science, social sciences, health, economics, law, philosophy and more. It is a big challenge; and Universities have a key role to play in meeting it.

It is no coincidence that the Australian Government and state and territory governments turned to a respected academic to investigate the impact of climate change on Australia, and to propose potential solutions.  Professor Ross Garnaut in turn drew upon the work of many of his colleagues at ANU and other universities, for the latest data, for research, thinking and ideas to respond to what he identified as a ‘diabolical problem.’

Although from one discipline, Economics, Professor Garnaut’s report reflects the reality that at the heart of the climate change challenge is the need for a deep comprehension of interlaced, inseparable elements in highly complex systems. Perhaps no challenge facing us demands such an interdisciplinary approach. It is a challenge that the community expects universities to help to meet, and one that universities must help meet.

ANU is seeking to respond to that challenge with the formation of the ANU Climate Change Institute under the leadership of Professor Will Steffen.  This initiative represents a substantial effort by the University community to harness expertise across disciplines to extend knowledge about climate change – its drivers, its implications, the scope for positive responses to its impact, and possible correctives to its trajectory.

It will develop interdisciplinary research projects on climate change through the application of the University’s core capabilities around critical questions and issues.

It will develop high quality education programs aimed at meeting the national and international demand for qualified practitioners.  From 2009 ANU will offer an interdisciplinary Masters in Climate Change offered jointly between the Fenner School of Environment and Society and the Crawford School of Economics and Government. We believe it is the first of its kind in Australia.

The Climate Change Institute will also engage globally, co-hosting the International Alliance of Research Universities Copenhagen Climate Change Congress in March 2009, and engaging with the Intergovernmental Panel on Climate Change (IPCC), the World Climate Research Programme (WCRP), and the International Geosphere-Biosphere Programme (IGBP), among others.

ANU is seeking to respond to the expectations of the Australian community and government that the national university seek to find solutions to the complex problems that confront us.  The reality is that the world’s problems are inter-connected, and universities need organisational flexibility to respond creatively to the need for new knowledge in addressing them.

While the world faces the ticking time bomb of climate change, and universities here and around the world seek new ways to address such complex problems, another time bomb is ticking for universities – Australia’s changing demography.

The Group of Eight has today released a backgrounder on the challenge of population change. It estimates that at least one third of the annual number of Australian PhD graduates will be needed each year, on average, over the next decade merely to replace retirements from the academic workforce.  Currently three quarters of doctoral graduates flow into non-academic occupations, so without additional output we would see either a slowdown in the supply of doctoral graduates to the broader labour market – at a time when the country is seeking to increase the capacity of the private and public sectors to absorb new knowledge – or a shortfall in academic positions; and this is without factoring in any increase in the number of institutions to meet growth in future demand for tertiary education.

It was therefore pleasing to see the interim Report of the House of Representatives Standing Committee on Industry, Science and Innovation on Research Training in Australia.  The committee is convinced, as are we, that there is a strong case for reform – and importantly, recommendations with budget implications have bi-partisan support.

The problem is sharper for fields of research from which the pull of labour market demand is strongest – such as in engineering or geology.  We should not assume that we can meet domestic shortfall readily through immigration in the future without being prepared to pay the prices that the intensifying international competition for intellectual talent is beginning to demand.

The educational attainment of the Australian workforce is considerably below that of the world’s leaders.  Two-thirds of Australia’s workforce over 25 years of age have no post-secondary qualifications, and one third have achieved less than upper-secondary education.  Only 77% of females and 68% of males aged 19 have completed Year 12 or equivalent.

To bring Australia up to an educated workforce equivalent to the world’s leaders would involve an additional 1 million people between 25 and 44 years getting tertiary education qualifications.  To achieve that lift in the domestic skills base is challenging.  Not to do it leaves a challenge of a different kind.

Additionally, for young people aged 15 to 25, that objective would require a much higher rate of participation and would mean finding new ways of promoting access and success among potential learners who lack readiness.  For equity as well as productivity purposes it is necessary to close the achievement gap without lowering educational standards.

Taken together these rising and diversifying forms of demand for learning cannot be accommodated within the current structure of tertiary education.  Greater diversification and innovation will be needed, including new types of providers, public and private, offering flexible, multi-modal access to learning opportunities.

We should not assume this will happen naturally.  Indeed we can expect resistance to it.  New incentives will be needed to overcome structural and cultural conservatism. This is another reason to move from the ‘one size fits all’ approach and, rather than looking for a simple solution, to develop a policy framework that promotes and supports difference through design rather than drift.

Twenty years ago the Australian Government expanded higher education on a foundation of three pillars:

  • An injection of additional public investment for growth in student places and new campuses
  • The provision of income-contingent loans to enable students to make a co-contribution to the costs of higher education without up-front financial barriers
  • A redistribution of existing resources from established universities to new institutions, notably through a ‘clawback’ of research funding.

The legacy of that approach is the persistence of sameness in the funding rates for teaching, the thin spreading of funding, unvalidated claims about standards of qualifications and excellence, and a levelling down of performance peaks. It was a ‘one size fits all approach’ and it was called a unified national system. In my experience over now 23 years, it was not national, rarely unified and hardly a system.

Expansion encouraged all universities to adopt broadly similar aspirations.

We are not alone.  Boulton and Lucas made that clear to us when they discussed the European dilemma: how to have research powerhouses amongst the world’s best and provide higher education for a growing proportion of the population.  They point out that “…excessive convergence towards a single model of the basic research-focused university, with a lack of differentiated purpose, structure and mission…” has resulted in at least 980 (European) universities claiming to “aspire to achieve international excellence in research.”  In the same article, they point out that:  “The US has resolved this dilemma. Fewer than 250 universities award postgraduate degrees and fewer than 100 are recognised as research intensive, with the rest devoted to teaching and scholarship.” And remember that the U.S. has thousands of post-secondary institutions.

The approach in Europe and Australia, including the funding approach, probably impeded some universities from identifying and investing in niches neglected by the established research universities.

Regardless of their varying circumstances, universities have tended to use the rhetoric of excellence, rather than ‘fitness for purpose’.  But ‘excellence’ is an empty notion when used without reference to validated performance standards.

The desire of institutions to move ‘up whichever ladder’ distracts higher education from its public purposes, skews missions, and alters institutional priorities and spending to drift beyond the limits of their capacity.

We see this very clearly in Australia where the gap between the Go8 universities and others in terms of research performance has been widening, not narrowing, despite the processes and funding of the last twenty years.

Clearly it is sub-optimal and counter-productive for the country to continue diluting the public investment in proven research capacity and performance.  We certainly cannot afford to apply this flawed thinking of the past to the future expansion and diversification of tertiary education.

An unfortunate effect of rankings like the Shanghai Jiao Tong measures that are based on internationally comparable data relating primarily to research output quality is that, in a highly competitive context, they reinforce traditional academic norms and encourage what Frans Van Vught has termed the ‘reputation race’.

He noted recently that:

The average quality of the current European higher education and research system is good but its excellence is limited.  A diversification of missions and of research portfolios and funding levels, would be necessary to allow the occurrence of more European top universities.

We could say the same about the fair to average overall quality of Australian higher education and research, while noting the view strongly held in European quarters and which resonates here, that student mobility is promoted through avoidance of stratification of universities.  It is seen to be anti-egalitarian to invest in excellence – at least in intellectual endeavours, for we don’t appear to have the same reluctance in sport. We invest additionally in the best athletes and national teams because in sport, we understand the concept of investing in excellence and that high achievement encourages the others.

Now we need a new approach to meet new challenges alongside longstanding needs to enlarge educational participation and strengthen capacity.

The three pillars on which the current system was expanded twenty years ago have become unstable. The Government share of funding has shrunk.  Market sources of finance are playing a greater role.

The problem with reliance on market forces in higher education is its tendency to reduce diversity in the system, and raise costs for students and taxpayers.  The market can be afraid of difficult or intellectually challenging ideas where the payoff isn’t easily predictable or easily apparent.

Clearly we can’t and don’t want to wind back the clock to a centrally-planned model of higher education.  Equally we cannot rely simply on the market.  A more flexible regulatory and financing approach is necessary, and we need to give form to the notion of mission-based funding compacts for each university that Labor proposed ahead of the 2007 election and has subsequently indicated its intention to progress in government.  I note that Deputy Prime Minister Gillard and Minister Carr have repeatedly declared their ambitions for the universities: structurally reformed, culturally reformed, socially inclusive and internationally competitive.  Hard to argue against – possible if not easy to achieve.

It is not enough to give universities what they ask for: more money, less regulation and more autonomy.  Or for universities to expect to be given what they ask for.  Much as we might be able to argue the compelling case for better generic funding, I can’t see that we stand a chance without conceding substantial reform and improvement.

To achieve what we need, we need not just Compacts but Compacts with teeth. We need Compacts that will hold us to hard decisions, validate and use evidence to agree and provide adequate support for our strengths and not simply endorse what we say about ourselves.

Compacts will fail to provide bold and different approaches if they are tied up in second-order metrics for shallow accountability reporting.

There must be some sharp incisors to bite through the surface of universities’ claims.  I suggest that the Government should complement negotiating teams of departmental officials with people with university experience (possibly international) who can exercise the discriminating judgements that will be necessary to validate the claims of universities against their missions.

The two main components of Compacts that may be on offer are the full funding of research and greater flexibility in financing of higher education.   I see these compacts working along the lines of the recent COAG reforms of Commonwealth-State specific purpose programs, to support additional performance-based actions on top of adequate (better) funding for core activities.  These would be significant reforms and I understand they need to be mutually beneficial for universities and the supporting community.

Hence, in return for full economic costs of research, I believe it is more than reasonable that universities should be able to demonstrate better knowledge of their costs, proper pricing, avoidance of internal cross subsidies, and improved management of their estates.

In return for improved funding, greater financing flexibility, and, for some universities, ‘deregulation’, I believe universities should be prepared to expand scholarships and bursaries for needy students, extend their outreach to raise aspirations and readiness of students from disadvantaged areas, and give greater attention to student success.

In a truly differentiated system it will be necessary to provide better support for students.  We must have a system that allows talented students, regardless of their life circumstances, to go to the university that best meets their ambitions and interests.  This will mean tackling the issue of income support, including rental assistance, if we are to develop a comprehensive strategy for improving the socio-economic mix of student enrolments in a markedly differentiated university system.

The participation rate of disadvantaged groups in higher education, notably students from low socio-economic backgrounds, Indigenous Australians, and Australians from regional and remote areas, remains low.

For many of these potential students and their parents, the additional education costs that cannot be deferred in the same way as HECS constitute an insurmountable burden – living expenses remain the major financial barrier to participation. Yet the system of student income support has not been reviewed by government since 1992.

I believe the Government is heading in the right direction with its three pillars of reform:

  • Expanding participation for the purposes of social inclusion and productivity improvement.
  • Focussing on internationally benchmarked quality as the key driver of investment in research and research training, an additional benefit of which might be to dispense with ‘perceptions’ and replace them with proven performance.
  • Increasing university flexibility and harmonising competitive processes with national public priorities.

I can simply enjoin the Government to stay on track, hold to the line and not get distracted by those who will seek a weaker course.  Even if there is to be a shortfall between the investment increases we need and the capacity of the economy to afford them for the time being, it is imperative that there is no compromise on the goals we set for ourselves and the standards we set for their achievement.

Anything less would sell Australia short.

Ian Chubb

Times Higher Education – QS World University Rankings (2008): a niche industry in formation?

The new Times Higher Education – QS World University Rankings (2008) were just released, and the copyright regulations deepen and extend, push and pull, enable and constrain.  Global rankings: a niche industry in formation?

Kris Olds

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard & Poor’s credit rating report regarding UK universities (‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘), the Chronicle of Higher Education just noted that the 2008 Academic Ranking of World Universities (ARWU) (published by Shanghai Jiao Tong University) has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s  ‘University rankings: deliberations and future directions‘), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology, and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’ (photo below), including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, to have rankers themselves on boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including to themselves, I presume)?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the list consists of 189 universities…

Quite the authority being vested in ranking schemes that are still being hotly debated! (The arithmetic is telling: two top-150 lists collapsing to 189 universities implies that 111 institutions appear on both.)

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high quality universities, despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what prevented a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment-cum-governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence, until now, of a serious European critique of, or at least interrogation of, rankings and the rankers. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking are a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they never realized, until recently, how deeply geopolitical and geoeconomic these vehicles are: they enable the powerful to maintain their standing and to draw yet more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced an acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one really expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As in Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities in the way King Abdullah University of Science and Technology (KAUST) in Saudi Arabia recently did with a $10 billion endowment, angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be described as a fait accompli phase, national and European political leaders progressively came to view the low standing of European universities in the two key ranking schemes – Shanghai, and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – sit so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets and the relaunched Lisbon agenda drive, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well-placed, despite their immense experience with bibliometrics. In short, since rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to sustained action is pretty quick in some sectors, while in others it is not.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, etc. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics that they are part and parcel of.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university rankings schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally.  The full report (French only) can be downloaded here, while the press release (French only) can be read here.  France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French.  In some ways it would have been better to release the report simultaneously in French and English, but the contradiction of France critiquing dominant ranking schemes, in English, for their bias towards the English language was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU or one of the many emerging initiatives above would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Has higher education become a victim of its own propaganda?

Editor’s note: today’s guest entry was kindly written by Ellen Hazelkorn, Director and Dean of the Faculty of Applied Arts, and Director of the Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, Ireland. She also works with the OECD’s Programme for Institutional Management of Higher Education (IMHE). Her entry should be read in conjunction with some of our recent entries on the linkages and tensions between the Bologna Process and the Lisbon Strategy, the role of foundations and endowments in facilitating innovative research yet also heightening resource inequities, as well as the ever-present benchmarking and ranking debates.

~~~~~~~~~

The recent Council of the European Union’s statement on the role of higher education is another in a long list of statements from the EU, national governments, the OECD, UNESCO, etc., proclaiming the importance of higher education (HE) to/for economic development. While HE has long yearned for the time when it would head the policy agenda and be rewarded with vast sums of public investment, it may not have realised that increased funding would be accompanied by calls for greater accountability and scrutiny, pressure for value-for-money, and organisational and governance reform. Many critics cite these developments as changing the fundamentals of higher education. Has higher education become the victim of its own propaganda?

At a recent conference in Brussels a representative from the EU reflected on this paradox. The Lisbon Strategy identified a future in which Europe would be a/the leader of the global knowledge economy. But when the statistics were reviewed, there was a wide gap between vision and reality. The Shanghai Academic Ranking of World Universities, which has become the gold standard of worldwide HE rankings, placed too few European universities among the top 100. This was, he said, a serious problem and a blow to the European strategy. Change is required, urgently.

University rankings are, whether we like it or not, beginning to influence the behaviour of higher education institutions and higher education policy because they arguably provide a snap-shot of competition within the global knowledge industrial sector (see E. Hazelkorn, Higher Education Management and Policy, 19:2, and forthcoming Higher Education Policy, 2008). Denmark and France have introduced new legislation to encourage mergers or the formation of ‘pôles’ to enhance critical mass and visibility, while Germany and the UK are using national research rankings or teaching/learning evaluations as a ‘market’ mechanism to effect change. Others, like Germany, Denmark and Ireland, are enforcing changes in institutional governance, replacing elected rectors with corporate CEO-type leadership. Performance funding is a feature everywhere. Even the European Research Council’s method of ‘empowering’ (funding) the researcher rather than the institution is likely to fuel institutional competition.

In response, universities and other HEIs are having to look more strategically at the way they conduct their business and organise their affairs, and at the quality of their various ‘products’, e.g., educational programming and research. In return for increased autonomy, governments want more accountability; in return for more funding, governments want more income-generation; in return for greater support for research, governments want to identify ‘winners’; and in return for valuing HE’s contribution to society, governments want measurable outputs (see, for example, this call for an “ombudsman” for higher education in Ireland).

European governments are moving from an egalitarian approach – where all institutions are broadly equal in status and quality – to one in which excellence is promoted through elite institutions, differentiation is encouraged through competitive funding, public accountability is driven by performance measurements or institutional contracts, and student fees are a reflection of consumer buoyancy.

But neither the financial costs nor the implications of this strategy – for both governments and institutions – have been thought through. The German government has invested €1.9b over five years in the Excellence Initiative, but this sum – roughly €380m a year – pales into insignificance compared with claims that a single ‘world class’ university is a $1b – $1.5b annual operation, plus $500m with a medical school, or with other national investment strategies, e.g., China’s $20b ‘211 Project’ or Korea’s $1.2b ‘Brain 21’ programme, or with the fund-raising capabilities of US universities (‘Updates on Billion-Dollar Campaigns at 31 Universities’; ‘Foundations, endowments and higher education: Europe ruminates while the USA stratifies‘).

Given public and policy disdain for increased taxation, if European governments wish to compete in this environment, which policy objectives will be sacrificed? Is the rush to establish ‘world-class’ European universities hiding a growing gap between private and public, research and teaching, elite and mass education? Evidence from Ireland suggests that despite efforts to retain a ‘binary’ system, students are fleeing from less endowed, less prestigious institutes of technology in favour of ‘universities’. At one stage, the UK government promoted the idea of concentrating research activity in a few select institutions/centres until critics, notably the Lambert report and more recently the OECD, argued that regionality does matter.

Europeans are keen to establish a ‘world class’ HE system which can compete with the best US universities. But it is clear that such efforts are being undertaken without a full understanding of the implications, intended and unintended.

Ellen Hazelkorn

Benchmarking ‘the international student experience’

GlobalHigherEd has carried quite a few entries on benchmarking practices in the higher education sector over the past few months – the ‘world class’ university, the OECD innovation scoreboards, the World Bank’s Knowledge Assessment Methodology, the Programme for International Student Assessment, and so on.

University World News this week has reported on an interesting new development in international benchmarking practices – at least for the UK – suggesting, too, that the benchmarking machinery/industry is itself big business and likely to grow.

According to the University World News, the International Graduate Insight Group (or i-graduate) last week unveiled a study in the UK to:

…compare the expectations and actual experiences of both British and foreign students at all levels of higher education across the country. The Welsh Student Barometer will gather the opinions of up to 60,000 students across 10 Welsh universities and colleges. i-graduate will benchmark the results of the survey so that each university can see how its ability to match student expectations compares with that of other groupings of institutions, not only in Wales but also the rest of the world.

i-graduate markets itself as:

an independent benchmarking and research service, delivering comparative insights for the education sector worldwide: your finger on the pulse of student and stakeholder opinion.

We deliver an advanced range of dedicated market research and consultancy services for the education sector. The i-graduate network brings international insight, risk assessment and reassurance across strategy and planning, recruitment, delivery and relationship management.

i-graduate have clearly been busy amassing information on ‘the international student experience’. It has collected responses from more than 100,000 students from over 90 countries through its International Student Barometer (ISB), which they describe as the first truly global benchmark of the student experience. This information is packaged up (for a price) in multiple ways for different audiences, including leading UK universities. According to i-graduate, the ISB is:

a risk management tool, enabling you to track expectations against the experiences of international students. The ISB isolates the key drivers of international student satisfaction and establishes the relative importance of each – as seen through the eyes of your students. The insight will tell you how expectations and experience affect their loyalty, their likelihood to endorse and the extent to which they would actively encourage or deter others.
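
For readers curious about what ‘isolating the key drivers of satisfaction’ and ‘establishing the relative importance of each’ typically involve, here is a minimal sketch in Python of one common approach – a standardized least-squares regression of overall satisfaction on aspect ratings. Every aspect name and figure below is invented for illustration; i-graduate’s actual method is not public.

    import numpy as np

    # Hypothetical survey responses: rows are students, columns are 1-5
    # ratings for aspects of the experience (names invented for illustration).
    aspects = ["teaching", "accommodation", "visa_support", "social_life"]
    ratings = np.array([
        [4, 3, 2, 5],
        [5, 4, 3, 4],
        [3, 2, 2, 3],
        [4, 5, 4, 4],
        [2, 3, 1, 2],
        [5, 3, 4, 5],
    ], dtype=float)
    overall = np.array([4, 5, 3, 4, 2, 5], dtype=float)  # overall satisfaction

    # Standardize so coefficients are comparable across aspects, then fit an
    # ordinary least-squares model; the absolute size of each coefficient is
    # one crude proxy for a driver's relative importance.
    X = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
    y = (overall - overall.mean()) / overall.std()
    design = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)

    for name, weight in sorted(zip(aspects, np.abs(coef[1:])),
                               key=lambda kv: -kv[1]):
        print(f"{name}: {weight:.2f}")

A real barometer would add sampling weights, significance testing, and benchmarking against peer institutions, but the underlying logic of ‘key driver analysis’ is broadly of this kind.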

Indexes like this, whether providing information about one’s location in a hierarchy or strategic intelligence on brand loyalty, act as a kind of disciplining and directing practice.

The firms producing these indexes and barometers, like i-graduate, are in reality packaging particular kinds of ‘knowledge’ about the sector and selling it back to the sector. In a recent ESRC-funded seminar series on Changing Cultures of Competitiveness, Dr. Ngai-Ling Sum described these firms as brokering a ‘knowledge brand’ – a trade-marked bundle of strategies/tools and insights, available for a price, intended to alter an individual’s, institution’s or nation’s practices and in turn lead to greater competitiveness – a phenomenon she ties to the practices involved in producing the Knowledge-Based Economy (KBE).

It will be interesting to look more closely at, and report on in a future entry, what the barometer is actually measuring. For it is the specific socio-economic and political content of these indexes and barometers, as well as the disciplining and directing practices involved, that matters for understanding the direction of global higher education.

Susan Robertson

Thomson Innovation, UK Research Footprints®, and global audit culture

Thomson Scientific, the private firm fueling the bibliometrics drive in academia, is in the process of positioning itself as the anchor point for data on intellectual property (IP) and research. Following tantalizers in the form of free reports such as World IP Today: A Thomson Scientific Report on Global Patent Activity from 1997-2006 (from which the two images below are taken), Thomson Scientific is establishing, in phases, Thomson Innovation, which will provide, when completed:

  • Comprehensive prior art searching with the ability to search patents and scientific literature simultaneously (illustrated in the sketch after this list)
  • Expanded Asian patent coverage, including translations of Japanese full-text and additional editorially enhanced abstracts of Chinese data
  • A fully integrated searchable database combining the Derwent World Patent Index® (DWPI) with full-text patent data to provide the most comprehensive patent records available
  • Support of strategic intellectual property decisions through:
    • powerful analysis and visualization tools, such as charting, citation mapping and search result ranking
    • and, integration of business and news resources
  • Enhanced collaboration capabilities, including customizable folder structures that enable users to organize, annotate, search and share relevant files.
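
To make the first feature above concrete, here is a toy sketch in Python of what ‘searching patents and scientific literature simultaneously’ can amount to: merging hits from two indexed corpora into a single ranked result list. Every identifier, text, and scoring choice below is invented for illustration; Thomson Innovation’s actual indexing and ranking are proprietary.

    # Toy federated search across two corpora (patents and journal articles).
    # All identifiers and texts are invented; this only illustrates the idea
    # of one ranked list spanning both document types.
    patents = {
        "US7123456": "method for polymer battery electrode coating",
        "EP1234567": "lithium battery separator membrane",
    }
    articles = {
        "doi:10.1000/j.2007.01": "advances in lithium battery electrode materials",
        "doi:10.1000/j.2007.02": "fuel cell catalyst design",
    }

    def score(text, terms):
        # Crude relevance: count how many query terms appear in the text.
        return sum(term in text for term in terms)

    def federated_search(query):
        terms = query.lower().split()
        hits = [(doc_id, score(text, terms), source)
                for source, corpus in (("patent", patents), ("article", articles))
                for doc_id, text in corpus.items()]
        # One ranked list across both corpora, highest score first.
        return sorted((h for h in hits if h[1] > 0), key=lambda h: -h[1])

    for doc_id, s, source in federated_search("lithium battery electrode"):
        print(f"{source:7s} {doc_id}: score {s}")

A production system would of course use proper inverted indexes, field codes, and citation links rather than substring counting, but the appeal of the product is precisely this: one query, one ranked view, across document types that used to live in separate silos.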

[Two charts from World IP Today: A Thomson Scientific Report on Global Patent Activity from 1997-2006]

Speaking of bibliometrics, Evidence Ltd., the private firm that is shaping some of the debates about the post-Research Assessment Exercise (RAE) system of evaluating research quality and impact in UK universities, recently released the UK Higher Education Research Yearbook 2007. This £255 (for higher education customers) report:

[P]rovides the means to gain a rapid overview of the research strengths of any UK Higher Education institution, and compare its performance with that of its peers. It is an invaluable tool for those wishing to assess their own institution’s areas of relative strength and weakness, as well as a versatile directory for those looking to invest in UK research. It will save research offices in any organisation with R&D links many months of work, allowing administrative and management staff the opportunity to focus on the strategic priorities that these data will help to inform….

It sets out in clear diagrams and summary tables the research profile for Universities and Colleges funded for research. Research Footprints® compare each institution’s performance to the average for its sector, allowing strengths and weaknesses to be rapidly identified by research managers and by industrial customers.

See below for one example of how a sample university (in this case the University of Warwick) has its “Research Footprint®” graphically represented. This image is included in a brief article about Warwick by Vice-Chancellor Nigel Thrift, and is available on Warwick’s News & Events website.

[Figure: the University of Warwick’s Research Footprint® diagram]
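
As a rough sketch of the computation such a footprint implies – Evidence Ltd.’s actual indicators and normalization are proprietary, so every name and figure below is invented – each indicator can be expressed as a ratio to the sector average, with values above 1.0 marking relative strength:

    # Illustrative footprint: one institution's indicators expressed as ratios
    # to sector averages (all figures invented; Evidence Ltd.'s actual
    # indicators and normalization are proprietary).
    sector_average = {
        "papers_published": 1200.0,
        "citations_per_paper": 5.2,
        "research_income_gbp_m": 48.0,
        "phd_awards": 310.0,
    }
    institution = {
        "papers_published": 1500.0,
        "citations_per_paper": 6.5,
        "research_income_gbp_m": 40.0,
        "phd_awards": 350.0,
    }

    footprint = {k: institution[k] / sector_average[k] for k in sector_average}
    for indicator, ratio in sorted(footprint.items(), key=lambda kv: -kv[1]):
        flag = "relative strength" if ratio > 1.0 else "relative weakness"
        print(f"{indicator:22s} {ratio:4.2f} x sector average ({flag})")

Plotted radially, ratios of this kind trace exactly the sort of polygonal ‘footprint’ shown in the Warwick diagram, which is what makes strengths and weaknesses so quick to spot.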

Given the metrics that are utilized, it is clear, even if the data are not published, that individual researchers’ footprints will be available for systematic and comparative analysis, thereby enabling the governance of faculty with the back-up of ‘data’, and the targeted recruitment of the ‘big foot’ wherever s/he resides (though Sasquatches presumably need not apply!).

Kris Olds