Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been a project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales, Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, turned into figures, and combined into one overall score, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared and the scope of indicators is rather limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than ten years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all, which is actually not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?


Firstly, we do not believe in ranking entire HEIs, mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average overall when in fact its physics department is outstanding, its sociology department appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that, on average, he is doing fine.

So instead we always rank at the subject level. The results of the first ExcellenceRanking, which focused on the natural sciences and mathematics in European universities, with a clear target group of prospective Master's and PhD students, prove the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was within a set of quite closely related disciplines.


Secondly, we do not create values by weighting indicators and then calculating an overall value. Why? Mainly because any weight is necessarily arbitrary, or in other words political: the person doing the weighting decides which weight each indicator gets, and by doing so pre-decides the outcome of the ranking. Matters get even worse when the weighted values are then added together into one overall score, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of 2 and citations a weight of 1, the department will look very strong; do it the other way round and it will look rather weak. If you then add the two values together, you make matters worse still, because you blur the difference between the two performances. And these two indicators are relatively closely related; if you aggregate research indicators with reputation indicators, the result becomes meaningless.
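
To make the arithmetic concrete, here is a minimal sketch (hypothetical departments and invented numbers, not CHE data) showing how the choice of weights alone can reverse the ordering:

```python
# Hypothetical example: two departments, two indicators, each already
# normalised to a 0-100 scale.
departments = {
    "A": {"publications": 90, "citations": 30},  # publishes a lot, rarely cited
    "B": {"publications": 40, "citations": 80},  # publishes less, widely cited
}

def overall_score(indicators, weights):
    """Weighted sum of indicator values, as used in typical league tables."""
    return sum(weights[name] * value for name, value in indicators.items())

schemes = {
    "publications weighted 2:1": {"publications": 2, "citations": 1},
    "citations weighted 2:1": {"publications": 1, "citations": 2},
}

for label, weights in schemes.items():
    order = sorted(departments,
                   key=lambda d: overall_score(departments[d], weights),
                   reverse=True)
    print(f"{label}: {order}")

# publications weighted 2:1: ['A', 'B']   (A scores 210, B scores 160)
# citations weighted 2:1: ['B', 'A']      (A scores 150, B scores 200)
```

The same two departments swap places purely because of the weighting decision; adding the weighted values into one overall score then hides that the flip ever happened.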

Instead, we let the indicator results stand on their own and let users decide what is important for their personal decision-making process. In the classical ranking, for example, we allow users to create “my ranking” so they can choose the indicators they want to look at and the order in which they appear.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the aggregation), the league table positions create the even worse illusion of distinct and decisive differences between places. They suggest a difference in quality (no time or space here to tackle the tricky issue of what quality might be) that is measurable to the percentage point: in other words, that there is a qualitative, objectively recognizable and measurable difference between place 12 and place 15, which is normally not the case.

Moreover, small numerical differences can create huge differences in league table position. Take the THES QS rankings: even in the social sciences subject cluster there is a mere 4.3-point difference, on a 100-point scale, between league ranks 33 and 43. In the overall university rankings, a meagre 6.7 points separate ranks 21 and 41, and a slim 15.3 points separate ranks 100 and 200. That is to say, the scores of HEIs in different league table positions may differ by much less than a single point, or less than 1% of an arbitrarily set figure, so the underlying data tell us much less than the league position suggests.

Our approach, therefore, is to sort institutions into groups (top, middle, bottom) that reflect the performance of each HEI relative to the other HEIs; a simple illustration of the difference this makes is sketched below.
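
As an illustration only (invented scores; CHE's actual grouping rules are more involved and are not reproduced here), the following sketch contrasts a strict league table with a relative top/middle/bottom grouping:

```python
# Invented indicator scores for ten departments on a 0-100 scale.
scores = {
    "U01": 78.4, "U02": 78.1, "U03": 77.9, "U04": 74.2, "U05": 73.8,
    "U06": 73.5, "U07": 69.9, "U08": 65.0, "U09": 61.2, "U10": 58.7,
}

# A league table turns sub-point gaps into seemingly decisive positions.
league = {uni: pos + 1 for pos, uni in
          enumerate(sorted(scores, key=scores.get, reverse=True))}
print("League positions:", league)   # U01 and U02 are 0.3 points apart, yet rank 1 vs 2

# A grouped presentation (here: simple terciles relative to the field) only
# claims that a department sits in the top, middle or bottom band.
ordered = sorted(scores.values(), reverse=True)
cut_top = ordered[len(ordered) // 3 - 1]         # lowest score still in the top third
cut_middle = ordered[2 * len(ordered) // 3 - 1]  # lowest score still in the middle third

groups = {uni: ("top" if v >= cut_top else "middle" if v >= cut_middle else "bottom")
          for uni, v in scores.items()}
print("Groups:", groups)
```

The grouped view deliberately refuses to claim more precision than the data can carry.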


This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators chosen by the ranking institution. Consequently, the results of a given ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but at most (depending on the methodology) who is performing best, or in our case better than average, in aspects the ranker considers relevant.

A small but highly relevant point should be added here. Rankings (in the higher education system as well as in other areas of life) may suggest that a result on an indicator proves that an institution is performing well in the area measured by the indicator. Well, it does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the institution's performance and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly that: a hint, not a proof. And in the case of many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (are more international students better? are more exchange agreements better?).

This is why we argue that rankings serve a useful function in creating transparency, provided they are properly used: that is, if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among several for informing whatever decision is at hand in relation to an HEI (study, cooperation, funding, etc.).

Finally, modesty is perhaps what a ranker should have in abundance. Having run the ExcellenceRanking through three phases (the initial round in 2007, a second phase with new subjects under way now, and a repeat of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something, of not picking an excellent institution. For the world of ranking, Einstein's conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) 'Rankings and Quality Assurance in Higher Education', Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) 'Ranking Higher Education Institutions – A European Perspective', Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this field (and often referring to our method) include Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary's expertise and research interests lie in higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. Rosemary has served as a member of ESRC Grants Board 1999-2003, and Panel Member of the Education Research Assessment Exercise 1996, 2001, 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills  (DIUS) over 2008.

~~~~~~~~~~~~~~

Institutional performance of universities and their academic staff and students is a very topical issue in many countries, for potential students and their families and sponsors, governments and businesses. As well as numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear among the top-ranked so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for (the Department of) Innovation, Universities and Skills (or DIUS) commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes  on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were On-Line Higher Education Learning, Intellectual Property and research benefits; Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research, enabling business to innovate and engagement in knowledge transfer activity, high quality teaching, improving work force skills and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, as well as surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out – higher education institutions and their local communities (which is not just about servicing business), universities' provision for supporting the learning of their own staff (since they are major employers in their localities), and the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume but not impact.

However, what is not emphasized enough is that the consequences, once a performance measure is made public, are not within anyone's control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus, in the RAE 2008, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, and being helped by HEFCE to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, on which HEFCE feels it is not yet ready to pronounce. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added for recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors, importantly by articulating the values and visions of staff and students, and possibly very little by ‘performance’ measures; they often appeal to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed, which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective, fails to sufficiently grasp how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc) and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

Message 1: ‘RAE2008 confirms UK’s dominant position in international research’

Like the launch of a spaceship at Cape Canaveral, the results of the UK Research Assessment Exercise (RAE) are being prepared for full release. The press release was loaded up 14 minutes ago (and is reprinted below). Careers, and department futures, will be made and broken when the results emerge in 46 minutes.

Note how they frame the results ever so globally; indeed far more so than in previous RAEs. I’ll be reporting back tomorrow when the results are out, once I’ve had a chance to unpack what “international” means, to assess just how “international” the make-up of the review panels (both the main and sub-panels) is or is not, and to see what types of international registers were taken into account when assessing ‘quality’. In short, can one self-proclaim a “dominant position” in the international research landscape, and if so on what basis? Leaving aside the intra-UK dynamics (and effects) at work here, this RAE is already turning out to be a mechanism for positioning a research nation within the global research landscape. But for what purpose?

RAE2008 confirms UK’s dominant position in international research

18 December 2008

The results of the 2008 Research Assessment Exercise (RAE2008) announced today confirm the dominant position that universities and colleges in the United Kingdom hold in international research.

RAE2008, which is based on expert review, includes the views of international experts in all the main subject areas. The results demonstrate that 54% of the research conducted by 52,400 staff submitted by 159 universities and colleges is either ‘world-leading’ (17 per cent in the highest grade) – or ‘internationally excellent’ (37 per cent in the second highest grade).

Taking the top three grades together (the third grade represents work of internationally recognised quality), 87% of the research activity is of international quality. Of the remaining research submitted, nearly all is of recognised national quality in terms of originality, significance and rigour.

Professor David Eastwood, Chief Executive of HEFCE, said:

“This represents an outstanding achievement, confirming that the UK is among the top rank of research powers in the world. The outcome shows more clearly than ever that there is excellent research to be found across the higher education sector. A total of 150 of the 159 institutions have some work of world-leading quality, while 49 have research of the highest quality in all of their submissions.

“The 2008 RAE has been a detailed, thorough and robust assessment of research quality. Producing quality profiles for each submission – rather than single-point ratings – has enabled the panels to exercise finer degrees of judgement. The assessment process has allowed them to take account of the full breadth of research quality, including inter-disciplinary, applied, basic and strategic research wherever it is located.

“Although we cannot make a direct comparison with the previous exercise carried out in 2001, we can be confident that the results are consistent with other benchmarks indicating that the UK holds second place globally to the US in significant subject fields. One of the most encouraging factors is that the panels reported very favourably on the high-quality work undertaken by early career researchers, which will help the UK to maintain this leading position in the future.”

John Denham, Secretary of State for Innovation, Universities and Skills, said:

“The latest RAE reinforces the UK’s position as a world leader in research and I congratulate our universities and colleges for achieving such outstanding results.

“The fact that over 50 per cent of research is either ‘world-leading’ or ‘internationally excellent’ further confirms that the UK continues to punch above its weight in this crucial field.

“To maintain global excellence during these challenging economic times it will be vital to continue to invest in research, this is why we have committed to fund almost £6bn in research and innovation in England by 2011.”

Key findings:

  • 54% of the research is either ‘world-leading’ (17% in 4*) – or ‘internationally excellent’ (37% in 3*)
  • 1,258 of the 2,363 submissions (53% of total) had at least 50% of their activity rated in the two highest grades. These submissions were found in 118 institutions
  • All the submissions from 16 institutions had at least 50% of their activity assessed as 3* or 4*
  • 84% of all submissions were judged to contain at least 5% world-leading quality research
  • 150 of the 159 higher education institutions (HEIs) that took part in RAE2008 demonstrated at least 5% world-leading quality research in one or more of their submissions
  • 49 HEIs have at least some world-leading quality research in all of their submissions.

The ratings scale, which was included in the press release, is pasted in below:

[RAE 2008 quality ratings scale, as reproduced in the press release]

Kris Olds

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ from rankings, and provide new governance technologies to restructure higher education and research systems.

That said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) results come out in late 2008, we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or to a relatively rapid wielding of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s relative position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the level of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g. see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA, given that their effects (assuming the exercise eventually comes out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile just two dimensions of these changes.

Anglo-American media networks and recurrent revenue

First, key new media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging them for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, the newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis is whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specifities of Humanities research, that can make it difficult to assess and compare with other sciences. Also,  it is not possible to accurately apply to the Humanities assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national peaks and (presumably) valleys of quality output, at the intra-European scale, to be mapped for the humanities as a whole but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (the Shanghai Jiao Tong ranking; the Times Higher QS ranking), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details of a Paris-based conference titled ‘International comparison of education systems: a european model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems:
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank

One of the hottest issues out there still continuing to attract world-wide attention is university rankings. The two highest-profile ranking systems, of course, are the Shanghai Jiao Tong and the Times Higher rankings, both of which focus on what might constitute a world-class university and, on that basis, on who is ranked where. Rankings are also part of an emerging niche industry. All this generates a high level of institutional, national, and indeed supranational (if we count Europe in this) angst about who’s up, who’s down, and who’s managed to secure a holding position. And whilst everyone points to the flaws in these ranking systems, the two have nevertheless managed to capture the attention and imagination of the sector as a whole. In an earlier blog entry this year GlobalHigherEd mused over why European-level actors had not managed to produce an alternative system of university rankings which might counter the hegemony of the powerful Shanghai Jiao Tong (whose ranking system privileges US universities) on the one hand, and act as a policy lever that Europe could pull to direct the emerging European higher education system, on the other.

Yesterday The Lisbon Council, an EU think-tank (see our entry here for a profile of this influential think-tank), released what might be considered a challenge to the Shanghai Jiao Tong and Times Higher ranking schemes: a University Systems Ranking (USR), presented in its report University Systems Ranking: Citizens and Society in the Age of Knowledge. The difference between this ranking system and the Shanghai and Times Higher rankings is that it focuses on country-level data and change, not on individual institutions.

The USR has been developed by the Human Capital Center at The Lisbon Council, Brussels (produced with support by the European Commission’s Education, Audiovisual and Culture Executive Agency) with advice from the OECD.

The report begins with the questions: why do we have university systems? What are these systems intended to do? And what do we expect them to deliver – to society, to individuals and to the world at large? The underlying message in the USR is that “a university system has a much broader mandate than producing hordes of Nobel laureates or cabals of tenure- and patent-bearing professors” (p. 6).

So how is the USR different, and what might we make of this difference for the development of universities in the future? The USR is based on six criteria:

  1. Inclusiveness – number of students enrolled in the tertiary sector relative to the size of its population
  2. Access – ability of a country’s tertiary system to accept and help advance students with a low level of scholastic aptitude
  3. Effectiveness – ability of country’s education system to produce graduates with skills relevant to the country’s labour market (wage premia is the measure)
  4. Attractiveness – ability of a country’s system to attract a diverse range of foreign students (using the top 10 source countries)
  5. Age range – ability of a country’s tertiary system to function as a lifelong learning institution (share of 30-39 year olds enrolled)
  6. Responsiveness – ability of the system to reform and change, measured by the speed and effectiveness with which the Bologna Declaration was accepted (15 of the 17 countries surveyed have accepted the Bologna criteria).

These criteria are then applied to 17 OECD countries (all but 2 of them signatories of the Bologna Process). A composite ranking is produced, as well as rankings on each of the individual criteria. So what were the outcomes for the higher education systems of these 17 countries?

Drawing upon all six criteria, a composite USR figure is then produced. Australia is ranked 1st, the UK 2nd and Denmark 3rd, whilst Austria and Spain are ranked 16th and 17th respectively (see Table 1 below). We can also see rankings based on the specific criteria (Table 2 below), and a sketch of how such a composite might be assembled follows the tables.

[Table 1: USR composite rankings of the 17 countries]

[Table 2: USR rankings by individual criterion]
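
For readers curious about the mechanics, here is a rough sketch of how a composite country ranking can be assembled from per-criterion ranks. The country names come from the text above, but the per-criterion ranks are invented for illustration and the equal-weight averaging rule is an assumption; the report's own figures and aggregation method are not reproduced here.

```python
# Invented per-criterion ranks (1 = best) across the six USR criteria for
# three of the 17 countries; the real report uses its own data and rules.
criterion_ranks = {
    "Australia": [1, 3, 2, 1, 4, 2],
    "UK":        [4, 2, 1, 2, 3, 5],
    "Denmark":   [2, 5, 4, 3, 1, 3],
}

# Assumed aggregation: average the six criterion ranks, then re-rank countries
# on that average (a lower average means a better composite position).
composite = {country: sum(ranks) / len(ranks)
             for country, ranks in criterion_ranks.items()}
for position, country in enumerate(sorted(composite, key=composite.get), start=1):
    print(position, country, round(composite[country], 2))

# 1 Australia 2.17
# 2 UK 2.83
# 3 Denmark 3.0
```

Note that the caveat raised in the CHE entry above applies here too: the choice of criteria and of the aggregation rule largely pre-determines the composite order.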

There is much to be said for this intervention by The Lisbon Council – not least that it opens up debates about the role and purposes of universities. Over the past few months there have been numerous heated public interventions on this matter – from whether universities should be little more than giant patenting offices to whether they should be managers of social justice systems.

And though there are evident shortcomings (such as the lack of clarity about what might count as a university; the assumption that a university-based education is the most suitable form of education for producing a knowledge-based economy and society; the absence of attention to the equity/access range within any one country, and so on), the USR does, at least, place issues like ‘lifelong learning’, ‘access’ and ‘inclusion’ on the reform agenda for universities across Europe. It also signals a set of values, not currently reflected in the two key ranking systems, that The Lisbon Council would like to advance.

However, the big question now is whether universities will see value in this kind of ranking system for its wider systemic, as opposed to institutional, possibilities, even if only as a basis for discussing what universities are for and how we might produce more equitable knowledge societies and economies.

Susan Robertson and Roger Dale

New 2008 Shanghai rankings, by rankers who also certify rankers

Benchmarking, and audit culture more generally, are clearly the issues of the week. Following our coverage of a new Standard & Poor’s credit rating report regarding UK universities (‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities), the Chronicle of Higher Education has just noted that the 2008 Academic Ranking of World Universities (ARWU), published by Shanghai Jiao Tong University, has been released on the web.

We’ve had more than a few stories about the pros and cons of rankings (e.g., 19 November’s  ‘University rankings: deliberations and future directions‘), but, of course, curiosity killed the cat so I eagerly plunged in for a quick scan.

Leaving aside the individual university scale, one of the most interesting representations of the data they collected, suspect though it might be, is this one:

The geographies, especially the disciplinary/field geographies, are noteworthy on a number of levels. The results are sure to propel the French (currently holding the rotating presidency of the Council of the European Union) into further action regarding the deconstruction of the Shanghai methodology and the development of alternatives (see my reference to this issue in the 6 July entry titled ‘Euro angsts, insights and actions regarding global university ranking schemes’).

I’m also not sure we can rely upon the recently established IREG-International Observatory on Academic Ranking and Excellence to shed unbiased light on the validity of the above table, and all the rest that are sure to be circulated, at the speed of light, through the global higher ed world over the next month or more. Why? Well, the IREG-International Observatory on Academic Ranking and Excellence, established on 18 April 2008, is supposed to:

review the conduct of “academic ranking” and expressions of “academic excellence” for the benefit of higher education, its stake-holders and the general public. This objective will be achieved by way of:

  • improving the standards, theory and practice in line with recommendations formulated in the Berlin Principles on Ranking of Higher Education Institutions;
  • initiating research and training related to ranking excellence;
  • analyzing the impact of ranking on access, recruitment trends and practices;
  • analyzing the role of ranking on institutional behavior;
  • enhancing public awareness and understanding of academic work.

Answering the explicit request of ranking bodies, the Observatory will review and assess selected rankings, based on methodological criteria and deontological standards of the Berlin Principles on Ranking of Higher Education Institutions. Successful rankings will be entitled to declare that they are “IREG Recognized”.

Now, who established the IREG-International Observatory on Academic Ranking and Excellence? A variety of ‘experts’ (photo below), including people associated with said Shanghai rankings, as well as U.S. News & World Report.

Forgive me if I am wrong, but is it not illogical, best intentions aside, for rankers themselves to sit on the boards of institutions that seek to review “the conduct of ‘academic ranking’ and expressions of ‘academic excellence’ for the benefit of higher education, its stake-holders and the general public”, while also handing out IREG Recognized certifications (including, I presume, to themselves)?

Kris Olds

‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities

This week, one of the two major credit rating agencies in the world, Standard & Poor’s (Moody’s is the other), issued its annual ‘Report Card’ on UK universities. This year’s version is titled UK Universities Enjoy Higher Revenues but Still Face Spending Pressures, and it has received a fair bit of attention in media outlets (e.g., the Financial Times and The Guardian). Our thanks to Standard & Poor’s for sending us a copy of the report.

Five UK universities were in the spotlight, having had their creditworthiness rated by Standard & Poor’s (S&P’s). In total, S&P’s assesses 20 universities in the UK (5 ratings are made public, the rest are confidential), and 90% of those surveyed are considered by the rating agency to be of high investment grade quality (A- or above).

Universities in the UK, it would appear from S&P’s Report Card, have had a relatively good year from ‘a credit perspective’. This pronouncement is surely something to celebrate in a year when the phrase ‘credit crunch’ has become the new metaphor for economic meltdown, and when higher education institutions are likely to be worried about the effects of the sub-prime mortgage lending crisis on loans to students and institutions more generally.

But to the average lay person (or even the average university professor), with a generally low level of financial literacy, what does all this mean? What does it mean for global rating agencies to pass judgment on UK universities, on the policies that drive the sector more generally, or, finally, on individual institutional governance decisions?

Three years ago, when one of us (Susan) was delivering an Inaugural Professorial Address at Bristol, S&P’s 2005 report on Bristol (AA/Stable/–) was flashed up, much to the amusement of the audience though to the bemusement of the Chair, a senior university leader. The mild embarrassment of the Chair was largely a consequence of the fact that he was unaware of this judgment on Bristol by a credit rating agency headquartered in New York.

Now the reason for showing S&P’s judgment on the University of Bristol was neither to amuse the audience nor to embarrass the Chair. The point at the time was to sketch out the changing landscape of globalizing education systems within the wider global political economy, to introduce some of the newer (and more private) players who increasingly wield policymaking/shaping power on the sector, to reflect on how these agencies work, and to delineate some of the emerging effects of such developments on the sector.

Our view is that current analyses of globalizing higher education have neglected the role of credit rating agencies in the governance of the higher education sector—as specialized forms of intelligence gathering, shaping and judgment determination on universities. Yet, credit rating agencies are, in many ways, at the heart of contemporary global governance. Witness, for example, the huge debates going on now about establishing a European register for ratings agencies.

The release this week of S&P’s UK Universities 2008 Report Card is thus an opportunity for GlobalHigherEd to sketch out for interested readers a basic understanding of global rating agencies and their relationship to the global governance of higher education.

Rating agencies – origins

Timothy Sinclair, a University of Warwick academic, has been writing for more than a decade on rating agencies and their roles in what he calls the New Global Finance (NGF) (Sinclair, 2000). His various articles and books (see, for example, Sinclair 1994; 2000; 2003; 2005)—some of which are listed below—are worth reading for those of you who want to pursue the topic in greater depth.

Sinclair outlines the early development and subsequent growing importance of credit rating agencies—the masters of capital and second superpowers—arguing that there have been a number of distinct phases in their development.

The first phase dates back to the 1850s, when compendiums of information were produced for American financial markets about large industrial infrastructure developments, such as railroads and canals. However, it was not until the 1907 financial crisis that these early compendiums of information were then used to make judgements about the creditworthiness of debtors (Sinclair, 2003: 148).

‘Rating’ then entered a period of rapid growth from the mid-1930s onwards, as a result of state governments in the US incorporating rating standards into their prudential rules for investment by pension funds.

A third phase began in the 1980s, when new financial innovations (particularly low-rated or junk bonds) were developed, and cheaper offshore non-national money markets were created (that is, places where funds are raised by selling debt obligations and equity outside of the current constraints of government regulation).

However this process, of what Sinclair (1994: 136) calls the ‘disintermediation’ of financing (meaning state regulatory bodies are side-stepped), creates information problems for those wishing to lend money and those wishing to borrow it.

The current phase is characterized by, on the one hand, the greater internationalization of finance and, on the other hand, the increased significance of capital markets that challenge the role of banks as intermediaries.

Credit rating agencies have, as a result, become more important as suppliers of the information with which to make credit-worthiness judgments.

New York-based rating agencies have grown rapidly since then, responding to innovations in financial instruments, on the one hand, and to the need for information, on the other. Demand for information has also generated competition within the industry, with some firms operating niche specializations – as we see, for instance, with Standard & Poor’s (itself a subsidiary of the publisher McGraw-Hill) and the higher education sector.

Credit rating is big, big business. As Sinclair (2005) notes, the two major credit rating agencies, Moody’s and Standard & Poor’s, pass judgment on around $30 trillion worth of securities each year. Ratings also affect the rates or costs of borrowing: the higher the rating, the lower the assessed risk of default on repayment to the lender, and therefore the lower the cost to the borrower.

Universities with different credit ratings will, therefore, be differently placed when they borrow – so the adage ‘the more you have, the more you get’ becomes a major theme.

The rating process

If we look at the detail of the ‘issuer credit rating’ and the ‘comments’ in the Report Card for, say, the University of Bristol or King’s College London, we can see that information is gathered on the financial standing of the issuer; on the industry, competitors and economy; on legal advice related to the specific issue; on management, policy, business outlook and accounting practices; and on the competitive position, quality of management, long-term industry prospects and the wider economic environment. As Sinclair (2003: 150) notes:

The rating agencies are most interested in data on cash flow relative to debt service obligations. They want to know how liquid the company is, and where there will be timely problems likely to hinder repayment. Other information may include five-year financial projections, including income statements and balance sheets, analysis of capital spending plans, financing alternatives, and contingency plans. This information which may not be publicly known is supplemented by agency research into the value of current outstanding obligations, stock valuations and other publicly available data that allows for an inference…

The rating that follows – an opinion on creditworthiness – is generated by an analytical team; a report is prepared with the rating and its rationale; this is put to a rating committee made up of senior officials; and a final determination is made in private. The decision is subject to appeal by the issuer. Issuer credit ratings can be either long or short term. S&P’s uses the following nomenclature for long-term issuer credit ratings (see Bankers Almanac, 2008: 1-3); a simple ordinal encoding of this scale is sketched after the list:

  • AAA – highest rating; extremely strong capacity to meet financial commitments
  • AA – very strong capacity to meet financial commitments
  • A – strong capacity to meet financial commitments, but susceptible to the adverse effects of changes in circumstances and economic conditions
  • BBB – adequate capacity to meet financial commitments
  • BB – less vulnerable in the near term than other lower-rated obligors, but faces major ongoing uncertainties
  • B – more vulnerable than BB; adverse business, financial or economic conditions will likely impair the obligor’s capacity to meet its financial commitments

Rating higher education institutions

In light of the above discussion, we can now look more closely at the kinds of judgments passed on those universities included in a typical Report Card on the sector by Standard & Poor’s (see 2008: 7).

The 2008 Report Card itself is short: a nine-page document which offers a ‘credit perspective’ on the sector more generally, and on five universities. We are told “the UK higher education sector has made positive strides over the past few years, but faces increasing risks in the medium-to-long term” (p. 2).

The Report goes on to note a trebling of tuition fees in the UK, the growth of the overseas student market and associated income, and an increase in research income for research-intensive universities – so that of the five universities rated, one has been upgraded, another has had its outlook revised to ‘positive’, and no ratings were adjusted for the other three.

The Report also notes (p. 2) that the universities publicly rated by S&P are among the leading universities in the UK. To support this claim they refer to another ranking mechanism that is now providing information in the global marketplace – The Times Higher QS World Universities Rankings 2007 – which is, as we have noted in a recent entry (‘Euro angsts’), receiving considerable critical attention in Europe.

However, the Report Card also notes pressures within the system: higher wage demands linked to tuition increases, the search for new researchers to be counted as part of the UK’s Research Assessment Exercise (RAE), global competition for international students, and the heightened expectations of students for better infrastructure as a result of higher fees.

Longer-term risks include the fact that by 2020 there will be 16% fewer 18-year-olds coming through the system, according to forecasts by Universities UK – with the biggest impact falling on the newer universities (in the UK these so-called ‘newer universities’ are former polytechnics that were granted university status in 1992).

Of the 20 UK universities rated in this S&P Report, 4 are rated AAA; 8 are rated AA; 6 are rated A; and 2 are rated BBB. The University of Bristol, as we can see from the analysts’ rating and comments which we have reproduced below, is given a relatively favorable rating. We have quoted this rating at length to give you a sense of the kind of commentary made and how it relates to the judgment passed.


Credit rating agencies, as instruments of the global governance of higher education

Credit rating agencies are particularly powerful because both markets and governments see them as authoritative sources of judgment, with the result that they are major actors in controlling access to capital markets. Yet despite the evident influence of credit rating agencies on the governance of universities in the UK and elsewhere, there is a remarkable lack of attention to this phenomenon. We think there are important questions that need to be researched and the results discussed more widely. For example:

  • How widely spread is the practice?
  • Why are some universities rated whilst others are not?
  • Why are some universities’ ratings considered confidential whilst others are not (keeping in mind that they are all, in the above UK case, public taxpayer supported universities)?
  • Have any universities contested their credit rating, and if so, through what process, and with what outcome?
  • How do universities’ management systems respond to these credit ratings, and in what ways might they influence ongoing policy decisions within the university and within the sector?
  • How robust are particular kinds of reputational or status ‘information’, such as World University Rankings, especially if we are looking at creditworthiness?

Our own coverage of these global rankings, along with that of University Ranking Watch and the Beerkens’ Blog, shows that there are clearly unresolved debates and major problems with global ranking schemes.

Clearly market liberalism, of the kind that has characterized this current period of globalization, requires new kinds of intermediaries to provide information for both buyer and seller. And it cannot hurt to have ‘outside’ assessments of the fiscal health of institutions (in this case universities) that are complex, often opaque, and taxpayer supported. However, to experts like Timothy Sinclair (2003), credit rating agencies privatize policymaking, and they can narrow the sphere of government intervention.

For EU Internal Market Commissioner Charlie McCreevy, credit rating agencies like Moody’s and S&P contributed to the current financial market turmoil because they underestimated the risks related to the structured credit products they rated. As the Commissioner commented in EurActiv in June: “No supervisor appears to have got as much as a sniff of the rot at the heart of the structured finance rating process before it all blew up.”

In other words, credit rating agencies lack political accountability and enjoy an ‘accountability gap’. And while efforts are now under way by regulators to close that gap by developing new regulatory frameworks and rules, analysts worry that these private actors will now find new ways around the rules, and in turn facilitate the creation of a riskier financial architecture (as happened with global mortgage markets).

As universities become more financialized, as well as ranked, indexed and barometered in the ways we have been mapping on GlobalHigherEd, such ‘information’ on the sector will also likely be deployed to pass judgment and generate ratings and rankings of ‘creditworthiness’ for universities. The net effect may well be to exaggerate the differences between institutions, to generate greater levels of uneven development within and across the sector, and to increase rather than decrease the opacity of the sector, thereby weakening its accountability.

In sum, there is little doubt that credit rating agencies, in passing judgments, play a key and increasingly important role in the global governance of higher education. It is also clear from these developments that we need to pay much closer attention to such seemingly mundane entities – credit rating agencies – and their role in that governance. And we are also hopeful that credit rating agencies will outline their own views on this important dimension of the small-g governance of higher education institutions.

Selected References

Bankers Almanac (2008) Standard & Poor’s Definitions, last accessed 5 August 2008.

King, M. and Sinclair, T. (2003) Private actors and public policy: a requiem for the new Basel Capital Accord, International Political Science Review, 24 (3), pp. 345-62.

Sinclair, T. (1994) Passing judgement: credit rating processes as regulatory mechanisms of governance in the emerging world order, Review of International Political Economy, 1 (1), pp. 133-159.

Sinclair, T. (2000) Reinventing authority: embedded knowledge networks and the new global finance, Environment and Planning C: Government and Policy, August 18 (4), pp. 487-502.

Sinclair, T. (2003) Global monitor: bond rating agencies, New Political Economy, 8 (1), pp. 147-161.

Sinclair, T. (2005) The New Masters of Capital: American Bond Rating Agencies and the Politics of Creditworthiness, New York: Cornell University Press.

Standard & Poor’s (2008) Report Card: UK Universities Enjoy Higher Revenues But Still Face Spending Pressures, London: Standard & Poor’s.

Susan Robertson and Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

The Beerkens’ blog noted, on 1 July, how the university rankings effect has even gone as far as reshaping immigration policy in the Netherlands. He included this extract, from a government policy proposal (‘Blueprint for a modern migration policy’):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the list consists of 189 universities…

Quite the authority being vetted in ranking schemes that are still in the process of being hotly debated!

On this broad topic, I’ve been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University’s Academic Ranking of World Universities and The Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can’t help but wonder why Europe, which generally has high-quality universities despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was arguably somewhat late in digging into this topic considering how much of an impact these assessment-cum-governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let’s ponder the absence of a serious European critique, or at least interrogation of, rankings and the rankers, until now. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking are a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes because they did not realize, until relatively recently, that ranking schemes are deeply geopolitical and geoeconomic vehicles that enable the powerful to maintain their standing and draw yet more resources inward. Angst about competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to their political nature.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, simply accepted the schemes, for the results summarized in a table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende merely reinforced an acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one really expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I’m not so sure. As with Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish two or three universities in the way Saudi Arabia recently did with the $10 billion endowment for the King Abdullah University of Science and Technology (KAUST), then angling to compete with the US privates should simply be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be defined as a fait accompli phase, national and European political leaders came progressively to view the low standing of European universities in the two key ranking schemes – Shanghai and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. Thus, if the strategy is to “become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, how can Europe become such a competitive global force when its universities – key knowledge producers – are so far off the fast-emerging and now hegemonic global maps of knowledge production?

In this political context, especially given state control over higher education budgets and the drive of the relaunched Lisbon agenda, Europe’s rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation for bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well placed, despite their immense experience with bibliometrics. In short, because rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education has been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to mount sustained action is pretty quick in some sectors, and not in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, and so on. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics that they are part and parcel of.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university rankings schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this ‘new reality’ has emerged for it is perceived to facilitate the ‘modernization’ of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France and Europe more generally.  The full report (French only) can be downloaded here, while the press release (French only) can be read here.  France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University’s Academic Ranking of World Universities) only partially understands French.  In some ways it would have been better to have the report released simultaneously in both French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can’t help but think that the EU, or one of the many emerging initiatives noted above, would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production

Numerous funding councils, academics, multilateral organizations, media outlets, and firms, are exhibiting enhanced interest in the evolution of the Chinese higher education system, including its role as a site and space of knowledge production. See these three recent contributions, for example:

It is thus noteworthy that the “Scientific business of Thomson Reuters” (as it is now known) has been seeking to position itself as a key analyst of the changing contribution of China-based scholars to the global research landscape. As anyone who has worked in Asia knows, the power of bibliometrics is immense, and quickly becoming more so, within the relevant governance systems that operate across the region. The strategists at Scientific clearly have their eye on the horizon, and are laying the foundations for a key presence in future deliberations about the production of knowledge in and on China (and the Asia-Pacific more generally).

Thomson and the gift economy

One of the mechanisms for establishing a presence and having an effect is the production of knowledge about knowledge (in this case patents and ISI Web of Science citable articles), as well as gifts. On the gift economy front, yesterday marked the establishment of the first ‘Thomson Reuters Research Fronts Award 2008’, jointly sponsored by Thomson Reuters and the Chinese Academy of Sciences (CAS) “Research Front Analysis Center”, National Science Library. The awards ceremony was held in the sumptuous setting of the Hotel Nikko New Century Beijing.

As the Thomson Reuters press release notes:

This accolade is awarded to prominent scientific papers and their corresponding authors in recognition of their outstanding pioneering research and influential contribution to international research and development (R&D). The event was attended by over 150 of the winners’ industry peers from leading research institutions, universities and libraries.

The award is significant to China’s science community as it accords global recognition to their collaborative research work undertaken across all disciplines and institutions and highlights their contribution to groundbreaking research that has made China one of the world’s leading countries for the influence of its scientific papers. According to the citation analysis based on data from Scientific’s Web of Science, China is ranked second in the world by number of scientific papers published in 2007. [my emphasis]

Thomson incorporates ‘regional’ journals into the Web of Science

It was also interesting to receive news two days ago that the Scientific business of Thomson Reuters has just added “700 new regional journals” to the ISI Web of Science, journals that “typically target a regional rather than international audience by approaching subjects from a local perspective or focusing on particular topics of regional interest”. The breakdown of newly included journals is below, and was kindly sent to me by Thomson Reuters:

Scientific only admits journals that meet international standard publishing practices and include enough English-language elements to enable the database development process, as noted here:

All journals added to the Web of Science go through a rigorous selection process. To meet stringent criteria for selection, regional journals must be published on time, have English-language bibliographic information (title, abstract, keywords), and cited references must be in the Roman alphabet.

In a general sense, this is a positive development, and one that many regionally focused scholars have long been calling for. There are inevitably some issues being grappled with about just which ‘regional’ journals are included, the implications for authors and publishers of providing English-language bibliographic information (not cheap on a mass basis), and whether it really matters in the end to a globalizing higher education system that seems fixated on international refereed (IR) journal outlets. Still, this is progress of a notable type.

Intellectual Property (IP) generation (2003-2007)

The horizon scanning that Thomson Reuters is engaged in generates relevant information for many audiences. For example, see the two graphics below, which track 2003-2007 patent production rates and levels within select “priority countries”; the graphics are available in World IP Today by Thomson Reuters (2008).

Noteworthy is the fact that:

China has almost doubled its volume of patents from 2003-2007 and will become a strong rival to Japan and the United States in years to come. Academia represents a key source of innovation in many countries. China has the largest proportion of academic innovation. This is strong evidence of the Chinese Government’s drive to strengthen its academic institutions

Thus we see China as a rapidly growing producer of IP (in the form of patents), though in a system that is relatively more dependent upon its universities as a base for the production process. To be sure, private and state-owned enterprises will become more significant over time in China (and Russia), but the relative importance of universities (versus firms or research-only agencies) in the knowledge production landscape is worth noting.

Through the production of such knowledge, technologies, and events, the Scientific business of Thomson Reuters seeks to function as the key global broker of knowledge about knowledge. Yet the role of this institution in providing and reshaping the architecture that structures ever more scholars’ careers, and ever more higher education systems, is remarkably under-examined.

Kris Olds

ps: alas, GlobalHigherEd is still being censored in China as we use a WordPress.com blogging platform and the Chinese government is blanket-censoring all WordPress.com blogs. So much for knowledge sharing!

The ‘other GATS negotiations’: domestic regulation and norms

In our previous entries in GlobalHigherEd (here and here) we introduced the World Trade Organization (WTO) and explained the content and implications of the liberalization negotiations within the General Agreement on Trade in Services (GATS). The liberalization negotiations are the best-known activity within the scope of the GATS. In fact, the GATS and education literature very often reduces the agreement to its liberalization disciplines (that is, market access and national treatment).

However, other negotiations that are equally relevant to the future of higher education are also taking place, specifically the negotiations on Domestic Regulation (DR) and on Rules (sometimes referred to as Norms).

Discussion of these topics takes place as the logical consequence of the fact that the GATS is an incomplete agreement. The GATS was designed and signed in the Uruguay Round, but member countries did not reach a consensus on sensitive issues such as Domestic Regulation (Article VI) and the so-called Rules (Articles X, XIII and XV). So, after Uruguay, two working groups – composed of all WTO member countries – were established with the objective of completing these articles.

Domestic regulation negotiations
Article VI establishes that national regulation cannot block the “benefits derived from the GATS” and calls on member countries to elaborate disciplines and procedures that help identify those national regulations imposed by states on foreign service providers that are ‘more burdensome than necessary’. The regulations in question include those associated with:

  • qualification issues (for instance, certificates that are required of education service providers),
  • technical standards (which can be related to quality assurance mechanisms), and
  • licensing requirements (which, in some countries and sectors, might refer to conditions and benchmarks on access to the service).

One of the procedures being discussed in the framework of the Working Group on DR is a controversial ‘necessity test’. If this instrument is approved, member states will have to demonstrate, if asked, that certain regulatory measures are necessary to achieve certain aims, and that they could not apply any less trade-restrictive alternative.

Rules
In the framework of the Working Group on Rules, three issues are being discussed:

  • Emergency Safeguard Mechanisms (Article X): These mechanisms, once settled, would permit countries to withdraw some liberalization commitments – without facing any sanction – where it can be demonstrated that the liberalization experience has had very negative effects. Southern countries are most interested in achieving strong mechanisms, while developed countries push for softer disciplines.
  • Government procurement (Article XIII): The Working Group examines how government procurement could be inserted into the GATS framework. Were this to happen, transnational services corporations could become public procurement bidders in foreign countries. Developed countries are the most interested in strong disciplines in relation to this rule.
  • Subsidies (Article XV): In this case, members are elaborating disciplines to avoid the “distortion to trade” caused by subsidies.

DR and Rules negotiations differ from the liberalization negotiations in that they do not proceed progressively (i.e. round after round). On the one hand, once member countries reach an agreement, further negotiations in these areas will not be necessary. On the other hand, DR and Rules disciplines affect all sectors indiscriminately because, in contrast to liberalization commitments, they are not negotiated sector by sector.

The outcome of the working groups on DR and Rules will thus modify the balance between the legitimate capacity of states to pursue certain social objectives (for instance, in relation to the access to and quality of public services such as education) and the obligation to guarantee a free trade environment for transnational service providers.

Given the importance of these ‘other’ negotiations in the GATS, our view is that the education community should keep a watchful eye on them as well. GlobalHigherEd readers might find the periodic publication TradeEducation News, launched by Education International, a useful way of doing so.

Antoni Verger and Susan Robertson