QS.com Asian University Rankings: niches within niches…within…

Today, for the first time, the QS Intelligence Unit published its list of the top 100 Asian universities in the QS.com Asian University Rankings.

There is little doubt that the top performing universities have already added this latest branding to their websites, or that Hong Kong SAR will have proudly announced that it has three universities in the top five while Japan has two.

The QS.com Asian University Rankings is a spin-out from the QS World University Rankings, published since 2005. Last year, when the 2008 QS World University Rankings was launched, GlobalHigherEd posted an entry asking: “Was this a niche industry in formation?” This was in reference to the strict copyright rules invoked – that ‘the list’ of decreasing ‘worldclassness’ could not be displayed, retransmitted, published or broadcast – as well as an acknowledgment that rankings and associated activities can enable the building of firms such as QS Quacquarelli Symonds Ltd.

Seems like there are ‘niches within niches within….niches’ emerging in this game of deepening and extending the status economy in global higher education.  According to the QS Intelligence website:

Interest in rankings amongst Asian institutions is amongst the strongest in the world – leading to Asia being the first of a number of regional exercises QS plans to initiate.

The narrower the geographic focus of a ranking, the richer the available data can potentially be – the US News & World Report draws on 18 indicators, the Joong Ang Ilbo ranking in Korea on over 30. It is both appropriate and crucial then that the range of indicators used at a regional level differs from that used globally.

The objectives of each exercise are slightly different – whilst a global ranking seeks to identify truly world class universities, contributing to the global progress of science, society and scholarship, a regional ranking should adapt to the realities of the region in question.

Sure, the ‘regional niche’ allows QS.com to package and sell new products to Asian and other universities, as well as information to prospective students about who is regarded as ‘the best’.

However, the QS.com Asian University Rankings does more work than just that. The ranking process and product place ‘Asian universities’ into direct competition with each other, reinforce a very particular definition of ‘Asia’ and therefore of Asian regionalism, and service an imagined emerging Asian regional education space.

All this, whilst appearing to level the playing field by invoking regional sentiments.

Susan Robertson

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has particular expertise in education economics and applied econometrics.

Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, and the demand for greater accountability and transparency have all contributed to the need to compare university quality across countries.

Recognition of this fact has also been greatly stirred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in terms of cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU is always associated with a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the fragmented landscape of French higher education institutions (see Les Echos, 7 August 2008).

Whether intended by the rankers or not, university rankings have taken on a destiny of their own: they are used by national policy makers to stimulate debates about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES and SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the volatility of the university ranks with respect to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we simultaneously activate different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (the set of selected indicators, the weighting scheme, and the aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than the single ranks, allows one to deal with the criticism, often made of league tables and ranking systems, that ranks are presented as if they were calculated under conditions of certainty while this is rarely the case.
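To illustrate the idea, here is a minimal sketch of such a multi-modeling simulation in Python. It is not the report’s actual code: the indicator data below are synthetic, and whereas the real analysis also varies the set of selected indicators, this sketch perturbs only the weighting scheme and the aggregation rule.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 88 universities x 6 normalised indicators.
# (Synthetic random data stand in for the real SJTU/THES indicator values.)
scores = rng.random((88, 6))
n_unis, n_inds = scores.shape
n_runs = 10_000

# freq[u, r] counts how often university u ends up at rank r.
freq = np.zeros((n_unis, n_unis))

for _ in range(n_runs):
    # One modelling scenario: draw a random weighting scheme...
    w = rng.dirichlet(np.ones(n_inds))
    # ...and pick an aggregation rule (linear vs. geometric).
    if rng.random() < 0.5:
        composite = scores @ w                    # weighted arithmetic mean
    else:
        composite = np.prod(scores ** w, axis=1)  # weighted geometric mean
    # Rank 0 = best performer in this scenario.
    ranks = np.argsort(np.argsort(-composite))
    freq[np.arange(n_unis), ranks] += 1

freq /= n_runs  # each row is now a probability distribution over ranks

Each row of the resulting matrix shows how often a university landed at each rank across all scenarios – a spread-out row signals a rank that is an artefact of methodological choices rather than a robust finding.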

The main findings of the report are the following. Both rankings are robust only in the identification of the top 15 performers on either side of the Atlantic, but unreliable on the exact ordering of all other institutions. Even when all twelve indicators are combined in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for them (one way to quantify this is sketched after the recommendations below). Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation alongside the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.
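The report does not publish the exact criterion behind the “about 50” figure above; as one hedged illustration, continuing from the simulation sketch earlier, the width of each university’s central 90% rank interval can be read off the frequency matrix, flagging institutions whose interval is too wide (the cut-off below is purely illustrative):

# Continuing from the simulation sketch: cumulative rank distributions.
cum = np.cumsum(freq, axis=1)
low = np.argmax(cum >= 0.05, axis=1)    # 5th-percentile rank per university
high = np.argmax(cum >= 0.95, axis=1)   # 95th-percentile rank per university
width = high - low

# Hypothetical rule: an interval spanning more than a quarter of the field
# means no meaningful single rank can be assigned.
unstable = np.flatnonzero(width > n_unis // 4)
print(f"{unstable.size} of {n_unis} universities have no stable rank")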

Beatrice d’Hombres and Michaela Saisana

European ambitions: towards a ‘multi-dimensional global university ranking’

Further to our recent entries on European reactions and activities in relation to global ranking schemes, and a forthcoming guest contribution to SHIFTmag: Europe Talks to Brussels, ranking(s) watchers should examine this new tender for a €1,100,000 (maximum) contract for the ‘Design and testing the feasibility of a Multi-dimensional Global University Ranking’, to be completed by 2011.

The Terms of Reference, issued by the European Commission’s Directorate-General for Education and Culture, are particularly insightful, while this summary conveys the broad objectives of the initiative:

The new ranking to be designed and tested would aim to make it possible to compare and benchmark similar institutions within and outside the EU, both at the level of the institution as a whole and focusing on different study fields. This would help institutions to better position themselves and improve their development strategies, quality and performances. Accessible, transparent and comparable information will make it easier for stakeholders and, in particular, students to make informed choices between the different institutions and their programmes. Many existing rankings do not fulfil this purpose because they only focus on certain aspects of research and on entire institutions, rather than on individual programmes and disciplines. The project will cover all types of universities and other higher education institutions as well as research institutes.

The funding is derived from the Lifelong Learning policy and program stream of the Commission.

Thus we see a shift, in Europe, towards the implementation of an alternative scheme to the two main global ranking schemes, supported by substantial state resources at a regional level. It will be interesting to see how this eventual scheme complements and/or overturns the other global ranking schemes that are products of media outlets, private firms, and Chinese universities.

Kris Olds

International university rankings, classifications & mappings – a view from the European University Association

Source: European University Association Newsletter, No. 20, 5 December 2008.

Note: also see ‘Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon’

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry (‘University Systems Ranking (USR): an alternative ranking framework from EU think-tank’) is getting heavy traffic these days, a sign that the rankings phenomenon just won’t go away. Indeed there is every sign that debates about rankings will be heating up over the next 1-2 years in particular, courtesy of the desire of stakeholders to better understand rankings, generate ‘recurring revenue’ off of rankings, and provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play. This situation reflects the strong role of the national state in governing and funding France’s higher education system, and France’s role in European development debates (including, at the moment, the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) comes out in late 2008 we will see institutions assess the position of each of their disciplines/fields, which will then lead to more support, or a relatively rapid wielding of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon that university’s relative position in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g., see ‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I’ve worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While ranking dispositions at the national and institutional levels are heterogeneous, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of these changes.

Anglo-American media networks and recurrent revenue

First, key new media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging these for sale in the American market. Yet if you adopt a market-making perspective this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst about the nature and impact of rankings (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes’) is leading to the production of critical reports on rankings methodologies, the sponsorship of high-powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example, Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field, the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis are whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that ‘Europe’ does not want to be ranked, or use rankings, as much if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe’s Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities the assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will allow national peaks (and, presumably, valleys) of quality output to be mapped at the intra-European scale, for the humanities as a whole and for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to ‘modernize’ European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled ‘International comparison of education systems: a European model?’, which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers is worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurring revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC. And it is underpinned by the analytical-cum-revenue-generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

Times Higher Education – QS World University Rankings (2008): a niche industry in formation?

The new Times Higher Education – QS World University Rankings (2008) rankings were just released, and the copyright regulations deepen and extend, push and pull, enable and constrain.  Global rankings: a niche industry in formation?

Kris Olds