Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, summed into an overall value, and often indexed to 100 for the best institution, counting down from there. Moreover, in many cases entire universities are compared, and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than ten years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all. That is not true: just because the Toyota Prius uses a very different technology to produce power does not exclude it from the species of automobiles. What, then, are the differences?


Firstly, we do not believe in the ranking of entire HEIs, mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average overall when in fact its physics department is outstanding, its sociology department appalling, and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that, overall, he is doing fine.

So instead we always rank at the subject level. And given the results of the first ExcellenceRanking, which focused on the natural sciences and mathematics in European universities, with a clear target group of prospective Master's and PhD students, we think this proves the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was within a group of quite closely related fields.


Secondly, we do not create values by weighting indicators and then calculating an overall score. Why not? The main reason is that any weight is necessarily arbitrary, or in other words political: the person doing the weighting decides which weight to assign, and by doing so pre-decides the outcome of the ranking. It gets even worse when the different values are then added into one overall figure, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of two and citations a weight of one, the department will look very strong. If you weight them the other way around, it will look pretty weak. If you then add the values together you make it even worse, because you blur the difference between the two performances. And those two indicators are still rather closely related; if you merge results from research indicators with reputation indicators, the outcome becomes entirely meaningless.
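
To make the arithmetic of this publications/citations example concrete, here is a minimal Python sketch (an editorial illustration with invented numbers, not CHE or THES data) showing how the choice of weights alone flips the league order:

```python
# Hypothetical departments: A publishes a lot but is rarely cited;
# B publishes less but is widely read. Values are pre-normalised scores.
departments = {
    "A": {"publications": 90, "citations": 20},
    "B": {"publications": 40, "citations": 80},
}

def overall_score(indicators, weights):
    # The weighted sum that league tables rely on -- and that CHE avoids.
    return sum(weights[name] * value for name, value in indicators.items())

for weights in ({"publications": 2, "citations": 1},
                {"publications": 1, "citations": 2}):
    league = sorted(departments, reverse=True,
                    key=lambda dept: overall_score(departments[dept], weights))
    print(weights, "->", league)

# {'publications': 2, 'citations': 1} -> ['A', 'B']
# {'publications': 1, 'citations': 2} -> ['B', 'A']
```

Same data, opposite league tables: the weights, not the performance, decide the outcome, and the single summed score hides the two very different profiles behind it.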

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classic ranking we allow users to create a "my ranking" view, so they can choose the indicators they want to look at and the order in which to apply them.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the aggregation), the league table positions create the even worse illusion of distinct, decisive differences between places. They suggest a difference in quality (no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point; in other words, a qualitative, objectively recognizable, measurable difference between place 12 and place 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES-QS rankings: even in the subject cluster SocSci there is a mere 4.3-point difference on a 100-point scale between league ranks 33 and 43. In the overall university rankings, a meagre 6.7 points separate rank 21 from rank 41, going down the table to a slim 15.3 points between rank 100 and rank 200. That is to say, the league table positions of HEIs might differ by much less than a single point, i.e., less than 1% of an arbitrarily set scale; the underlying score tells us much less than the league position suggests.

Our approach, therefore, is to create groups (top, middle, bottom) which refer to the performance of each HEI relative to the other HEIs.
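
A minimal sketch of this grouping logic (again an editorial illustration with invented scores; the quartile cut-offs here are an assumption, not CHE's published methodology):

```python
import statistics

# Hypothetical indicator scores for eight HEIs on a 100-point scale.
scores = {"U1": 71.2, "U2": 70.9, "U3": 66.5, "U4": 58.3,
          "U5": 57.8, "U6": 45.0, "U7": 44.1, "U8": 30.2}

# Cut points: top quartile -> "top" group, bottom quartile -> "bottom".
q1, _, q3 = statistics.quantiles(scores.values(), n=4)

def group(score):
    if score >= q3:
        return "top"
    if score <= q1:
        return "bottom"
    return "middle"

for hei, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(hei, group(score))
```

U1 and U2, separated by a statistically meaningless 0.3 points, land in the same group rather than in two league positions that imply a real difference in quality.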


This means our rankings are not as easily read as the others. However, we strongly believe in the intelligence of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators chosen by the ranking institution. Consequently, the results of the respective ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but maybe (depending on the methodology) who is performing best (or, in our case, better than average) in aspects considered relevant by the ranker.

A small but highly relevant aspect might be added here. Rankings (in the higher education system as well as in other areas of life) might suggest that a result in an indicator proves that an institution is performing well in the area measured by that indicator. Well, it does not. All an indicator does is hint: provided the data are robust and relevant, the results give some idea of how small the gap is between the institution's performance and the best possible result (if such a benchmark exists). The important word is "hint", because "indicare" – from which the word "indicator" derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, what counts as "best" or "better" is again a political decision if the indicator stands alone (e.g., are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in terms of creating transparency if they are properly used, i.e., if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among various others for informing whatever decision relates to an HEI (study, cooperation, funding, etc.).

Finally, modesty is perhaps what a ranker should have in abundance. Having run the ExcellenceRanking through three phases (the initial round in 2007, a second phase with new subjects right now, and a repetition of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something, of not picking an excellent institution. For the world of ranking, Einstein's conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) 'Rankings and Quality Assurance in Higher Education', Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) 'Ranking Higher Education Institutions – A European Perspective', Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this area (and often referring to our method) include, for example, Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

Regional content expansion in Web of Science®: opening borders to exploration

Editor's note: this guest entry was written by James Testa, Senior Director, Editorial Development & Publisher Relations, Thomson Reuters. It was originally published on an internal Thomson Reuters website. James Testa joined Thomson Reuters (then ISI) in 1983. From 1983 through 1996 he managed the Publisher Relations Department and was directly responsible for building and maintaining working relations with the more than three thousand international scholarly publishers whose journals are indexed by Thomson Reuters. In 1996 Mr. Testa was appointed Director of Editorial Development. In this position he directed a staff of information professionals in the evaluation and selection of journals and other publication formats for coverage in the various Thomson Reuters products. In 2007 he was named Senior Director, Editorial Development & Publisher Relations. In this combined role he continues to build content for Thomson Reuters products and works to increase efficiency in communication with the international STM publishing community. He is a member of the American Society for Information Science and Technology (ASIST) and has spoken frequently on behalf of Thomson Reuters in the Asia Pacific region, South America, and Europe.

Our thanks also go to Susan Besaw of Thomson Reuters for facilitating access to the essay. This guest entry ties in to one of our earlier entries on this topic (‘Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production’), as well as a fascinating new entry (‘The Canadian Center of Science and Education and Academic Nationalism’) posted on the consistently excellent Scott Sommers’ Taiwan Blog.

~~~~~~~~~~~~~~~~~~~~~

Thomson Reuters extends the power of its Journal Selection Process by focusing on the world's best regional journals. The goal of this initiative is to enrich the collection of important and influential international journals now covered in Web of Science with a number of superbly produced journals whose content is of specific regional importance.

Since Eugene Garfield, PhD, established the Journal Selection Process nearly fifty years ago, its primary goal has been to identify those journals which form the core literature of the sciences, social sciences, and arts & humanities. These journals publish the bulk of scholarly research, receive the most citations from the surrounding literature, and have the highest citation impact of all journals published today. The journals selected for the Web of Science are, in essence, the scholarly publications that meet the broadest research needs of the international community of researchers. They have been selected on the basis of their high publishing standards, their editorial content, the international diversity of their contributing authors and editorial board members, and on their relative citation frequency and impact. International journals selected for the Web of Science define the very highest standards in the world of scholarly publishing.

In recent years, however, the user community of the Web of Science has expanded gradually from what was once a concentration of major universities and research facilities in the United States and Western Europe to an internationally diverse group including virtually all major universities and research centers in every region of the world. Where once the Thomson Reuters sales force was concentrated in Philadelphia and London, local staff are now committed to the service of customers at offices in Japan, Singapore, Australia, Brazil, China, France, Germany, Taiwan, India, and South Korea.

As the global distribution of Web of Science expands into virtually every region on earth, the importance of regional scholarship to our emerging regional user community also grows. Our approach to regional scholarship effectively extends the scope of the Thomson Reuters Journal Selection Process beyond the collection of the great international journal literature: it now moves into the realm of the regional journal literature. Its renewed purpose is to identify, evaluate, and select those scholarly journals that target a regional rather than an international audience. Bringing the best of these regional titles into the Web of Science will illuminate regional studies that would otherwise not have been visible to the broader international community of researchers.

In the Fall of 2006, the Editorial Development Department of Thomson Reuters began this monumental task. Under the direction of Maureen Handel, Manager of Journal Selection, the team of subject editors compiled a list of over 10,000 scholarly publications representing all areas of science, social science, the arts, and humanities. Over the next twelve months the team was able to select 700 regional journals for coverage in the Web of Science.

The Web of Science Regional Journal Profile

These regional journals are typically published outside the US or UK. Their content often centers on topics of regional interest or approaches subjects from a regional perspective. Authors may be largely from the region rather than an internationally diverse group. Bibliographic information is in English, with the exception of some arts and humanities publications that are by definition in the native language (e.g. literature studies). Cited references must be in the Roman alphabet. All journals selected are published on time and are formally peer reviewed. Citation analysis may be applied, but the real importance of a regional journal is measured by the specificity of its content rather than its citation impact.
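
As an illustration only (journal evaluation is editorial judgment, not an algorithm; the field names and sample record below are hypothetical), the formal parts of this profile amount to a checklist that could be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Journal:
    title: str
    publishes_on_time: bool
    peer_reviewed: bool
    english_bibliographic_info: bool  # English title, abstract, keywords
    roman_alphabet_references: bool

def meets_regional_profile(journal: Journal) -> bool:
    # Formal criteria named above; note that citation impact is
    # deliberately not part of the screen for regional titles.
    return (journal.publishes_on_time
            and journal.peer_reviewed
            and journal.english_bibliographic_info
            and journal.roman_alphabet_references)

candidate = Journal("Revista Hypothetica de Agronomia Tropical",
                    publishes_on_time=True, peer_reviewed=True,
                    english_bibliographic_info=True,
                    roman_alphabet_references=True)
print(meets_regional_profile(candidate))  # True
```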

Subject Areas and Their Characteristics

These first 700 journals selected in 2007 included 161 Social Science titles, 148 Clinical Medicine titles, 108 Agriculture/Biology/Environmental Science titles, 95 Physics/Chemistry/Earth Science titles, 89 Engineering/Computing/Technology titles, 61 Arts/Humanities titles, and 38 Life Sciences titles. The editors’ exploration of each subject area surfaced hidden treasure.

Social Sciences:
The European Union and Asia Pacific regions yielded over 140 social science titles. Subject areas such as business, economics, management, and education have been enriched with regional coverage. Several fine law journals have been selected and will provide balance in an area normally dominated by US journals. Because of the characteristically regional nature of many studies in the social sciences, this area will provide a rich source of coverage that would otherwise not be available to the broader international community.

Clinical Medicine:
Several regional journals dealing with General Medicine, Cardiology, and Orthopedics have been selected. Latin America, Asia Pacific, and the European Union are all well represented here. Research in Surgery is a growing area in regional journals. Robotic and other novel surgical technologies are no longer limited to the developed nations; they now originate in China and India as well, and have potential use internationally.

The spread of diseases such as bird flu and SARS eastward and westward from Southeast Asia is a high-interest topic regionally and internationally. In some cases host countries develop defensive practices and, if enough time elapses, vaccines. Regional studies on these critical subjects will now be available in Web of Science.

Agriculture/Biology/Environmental Sciences:
Many of the selected regional titles in this area include new or endemic taxa of global interest. Likewise, regional agricultural or environmental issues are now known to have global consequences. Many titles are devoted to niche topics such as polar/tundra environmental issues or tropical agronomy. Desertification has heightened the value of literature from central Asian countries. Iranian journals report voluminously on the use of native, desert-tolerant plants and animals that may soon be in demand by desertification-threatened countries.

Physics/Chemistry/Earth Sciences:
Regional journals focused on various aspects of Earth Science are now available in Web of Science. These include titles focused on geology, geography, oceanography, meteorology, climatology, paleontology, remote sensing, and geomorphology. Again, the inherently regional nature of these studies provides a unique view of the subject and brings forward studies heretofore hidden.

Engineering/Computing/Technology:
Engineering is a subject of global interest. Regional journals in this area typically present subject matter as researched by regional authors for their local audience. Civil and Mechanical Engineering studies are well represented, providing solutions to engineering problems arising from local geological, social, environmental, climatological, or economic factors.

Arts & Humanities:
The already deep coverage of Arts & Humanities in Web of Science is now enhanced by additional regional publications focused on such subjects as History, Linguistics, Archaeology, and Religion. Journals from the European Union, Latin America, Africa, and Asia Pacific regions are included.

Life Sciences:
Life Sciences subject areas lending themselves to regional studies include parasitology, microbiology, and pharmacology. A specific example of valuable regional activity is stem cell research. The illegality of stem cell studies in an increasing number of developed countries has moved the research to various Asian countries, where it is of great interest inside and outside the region.

Conclusion

The primary mission of the Journal Selection Process is to identify, evaluate and select the top tier international and regional journals for coverage in the Web of Science. These are the journals that have the greatest potential to advance research on a given topic. In the pursuit of this goal Thomson Reuters has partnered with many publishers and societies worldwide in the development of their publications. As an important by-product of the steady application of the Journal Selection Process, Thomson Reuters is actively involved in raising the level of research communication as presented in journals. The objective standards described in the Journal Selection Process will now be focused directly on a new and expansive body of literature. Our hope, therefore, is not only to enrich the editorial content of Web of Science, but also to expand relations with the world’s primary publishers in the achievement of our mutual goal: more effective communication of scientific results to the communities we serve.

James Testa

Author's note: This essay was compiled by James Testa, Senior Director, Editorial Development & Publisher Relations. Special thanks to Editorial Development staff members Maureen Handel, Mariana Boletta, Rodney Chonka, Lauren Gala, Anne Marie Hinds, Katherine Junkins-Baumgartner, Chang Liu, Kathleen Michael, Luisa Rojo, and Nancy Thornton for their critical reading and comments.

Multi-scalar governance technologies vs recurring revenue: the dual logics of the rankings phenomenon

Our most recent entry ('University Systems Ranking (USR): an alternative ranking framework from EU think-tank') is getting heavy traffic these days, a sign that the rankings phenomenon just won't go away. Indeed, there is every sign that debates about rankings will heat up over the next 1-2 years in particular, courtesy of stakeholders' desires to better understand rankings, to generate 'recurring revenue' from rankings, and to provide new governance technologies to restructure higher education and research systems.

This said, I continue to be struck, as I travel to select parts of the world for work, by the diversity of scalar emphases at play.

In France, for example, the broad discourse about rankings elevates the importance of the national (i.e., French) and regional (i.e., European) scales, and only then does the university scale (which I will refer to as the institutional scale in this entry) come into play in terms of importance. This situation reflects the strong role of the national state in governing and funding France's higher education system, and France's role in European development debates (including, at the moment, holding the presidency of the Council of the European Union).

In the UK it is the disciplinary/field and then the institutional scales that matter most, with the institutional scale made up of a long list of ranked disciplines/fields. Once the new Research Assessment Exercise (RAE) results come out in late 2008, institutions will assess the position of each of their disciplines/fields, which will then lead to more support, or to relatively rapid allocation of the hatchet, at the disciplinary/field level. This is in part because much national government funding (via the Higher Education Funding Council for England (HEFCE), the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for Employment and Learning, Northern Ireland (DEL)) to each university is structurally dependent upon the relative position of each university in the RAE, which is the aggregate effect of the positions of the array of fields/disciplines in any one university (see this list from the University of Manchester for an example). The UK is, of course, concerned about its relative place in the two main global ranking schemes, but it is doing well at the moment, so the scale of concern is of a lower order than in most other countries (including all other European countries). Credit rating agencies also assess and factor in rankings with respect to UK universities (e.g., see 'Passing judgment': the role of credit rating agencies in the global governance of UK universities).

In the US – supposedly the most marketized of contexts – there is highly variable concern with rankings. Disciplines/fields ranked by media outlets like U.S. News & World Report are concerned, to be sure, but U.S. News & World Report does not allocate funding. Even the National Research Council (NRC) rankings matter less in the USA, given that their effects (assuming the rankings eventually come out following multiple delays) are more diffuse. The NRC rankings are taken note of by deans and other senior administrators, and also by faculty, albeit selectively. Again, there is no single higher education system in the US – there are systems. I've worked in Singapore, England and the US as a faculty member, and the US is by far the least addled or concerned by ranking systems, for good and for bad.

While the diversity of ranking dispositions at the national and institutional levels is heterogeneous in nature, the global rankings landscape is continuing to change, and quickly. In the remainder of this entry we’ll profile but two dimensions of the changes.

Anglo-American media networks and recurrent revenue

First, new key media networks, largely Anglo-American private sector networks, have become intertwined. As Inside Higher Ed put it on 24 November:

U.S. News & World Report on Friday announced a new, worldwide set of university rankings — which is really a repackaging of the international rankings produced this year in the Times Higher Education-QS World University Rankings. In some cases, U.S. News is arranging the rankings in different ways, but Robert Morse, director of rankings at the magazine, said that all data and the methodology were straight from the Times Higher’s rankings project, which is affiliated with the British publication about higher education. Asked if his magazine was just paying for reprint rights, Morse declined to discuss financial arrangements. But he said that it made sense for the magazine to look beyond the United States. “There is worldwide competition for the best faculty, best students and best research grants and researchers,” he said. He also said that, in the future, U.S. News may be involved in the methodology. Lloyd Thacker, founder of the Education Conservancy and a leading critic of U.S. News rankings, said of the magazine’s latest project: “The expansion of a business model that has profited at the expense of education is not surprising. This could challenge leaders to distinguish American higher education by providing better indicators of quality and by helping us think beyond ranking.”

This is an unexpected initiative, in some ways, given that the Times Higher Education-QS World University Rankings are already available online and U.S. News & World Report is simply repackaging them for sale in the American market. Yet if you adopt a market-making perspective, this joint venture makes perfect sense. Annual versions of the Times Higher Education-QS World University Rankings will be reprinted in a format familiar to US readers, thereby enabling London-based TSL Education Ltd., London/Paris/Singapore-based QS Quacquarelli Symonds, and Washington DC-based U.S. News & World Report to generate recurring revenue with little new effort (apart from repackaging and distribution in the US). The enabling mechanism is, in this case, reprint rights fees. As we have noted before, this is a niche industry in formation, indeed.

More European angst and action

And second, at the regional level, European angst (an issue we profiled on 6 July in ‘Euro angsts, insights and actions regarding global university ranking schemes‘) about the nature and impact of rankings is leading to the production of critical reports on rankings methodologies, the sponsorship of high powered multi-stakeholder workshops, and the emergence of new proposals for European ranking schemes.

See, for example, this newly released report on rankings titled Higher Education Rankings: Robustness Issues and Critical Assessment, published by the European Commission Joint Research Centre, Institute for the Protection and Security of the Citizen, Centre for Research on Lifelong Learning (CRELL).

The press release is here, and a detailed abstract of the report is below:

The Academic Ranking of World Universities carried out annually by the Shanghai’s Jiao Tong University (mostly known as the ‘Shanghai ranking’) has become, beyond the intention of its developers, a reference for scholars and policy makers in the field of higher education. For example Aghion and co-workers at the Bruegel think tank use the index – together with other data collected by Bruegel researchers – for analysis of how to reform Europe’s universities, while French President Sarkozy has stressed the need for French universities to consolidate in order to promote their ranking under Jiao Tong. Given the political importance of this field the preparation of a new university ranking system is being considered by the French ministry of education.

The questions addressed in the present analysis are whether the Jiao Tong ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

Robustness analysis of the Jiao Tong and THES ranking carried out by JRC researchers, and of an ad hoc created Jiao Tong-THES hybrid, shows that both measures fail when it comes to assessing Europe’s universities. Jiao Tong is only robust in the identification of the top performers, on either side of the Atlantic, but quite unreliable on the ordering of all other institutes. Furthermore Jiao Tong focuses only on the research performance of universities, and hence is based on the strong assumption that research is a universal proxy for education. THES is a step in the right direction in that it includes some measure of education quality, but is otherwise fragile in its ranking, undeniably biased towards British institutes and somehow inconsistent in the relation between subjective variables (from surveys) and objective data (e.g. citations).

JRC analysis is based on 88 universities for which both the THES and Jiao Tong rank were available. European universities covered by the present study thus constitute only about 0.5% of the population of Europe’s universities. Yet the fact that we are unable to reliably rank even the best European universities (apart from the 5 at the top) is a strong call for a better system, whose need is made acute by today’s policy focus on the reform of higher education. For most European students, teachers or researchers not even the Shanghai ranking – taken at face value and leaving aside the reservations raised in the present study – would tell which university is best in their own country. This is a problem for Europe, committed to make its education more comparable, its students more mobile and its researchers part of a European Research Area.

Various attempts in EU countries to address the issue of assessing higher education performance are briefly reviewed in the present study, which offers elements of analysis of which measurement problem could be addressed at the EU scale. [my emphasis]

While ostensibly “European”, does it really matter that the Times Higher Education-QS World University Ranking is produced by firms with European headquarters, while the Jiao Tong ranking is produced by an institution based in China?

The divergent logics underlying the production of discourses about rankings are also clearly visible in two related statements. At the bottom of the European Commission’s Joint Research Centre report summarized above we see “Reproduction is authorised provided the source is acknowledged”, while the Times Higher Education-QS World University Rankings, a market-making discourse, is accompanied by a lengthy copyright warning that can be viewed here.

Yet do not, for a minute, think that 'Europe' does not want to be ranked, or to use rankings, as much as if not more than any Asian or American or Australian institution. At a disciplinary/field level, for example, debates are quickly unfolding about the European Reference Index for the Humanities (ERIH), a European Science Foundation (ESF) backed initiative that has its origins in deliberations about the role of the humanities in the European Research Area. The ESF frames it this way:

Humanities research in Europe is multifaceted and rich in lively national, linguistic and intellectual traditions. Much of Europe's Humanities scholarship is known to be first rate. However, there are specificities of Humanities research that can make it difficult to assess and compare with other sciences. Also, it is not possible to accurately apply to the Humanities the assessment tools used to evaluate other types of research. As the transnational mobility of researchers continues to increase, so too does the transdisciplinarity of contemporary science. Humanities researchers must position themselves in changing international contexts and need a tool that offers benchmarking. This is why ERIH (European Reference Index for the Humanities) aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages. It is a fully peer-reviewed, Europe-wide process, in which 15 expert panels sift and aggregate input received from funding agencies, subject associations and specialist research centres across the continent. In addition to being a reference index of the top journals in 15 areas of the Humanities, across the continent and beyond, it is intended that ERIH will be extended to include book-form publications and non-traditional formats. It is also intended that ERIH will form the backbone of a fully-fledged research information system for the Humanities.

See here for a defense of this ranking system by Michael Worton (Vice-Provost, University College London, and a member of the ERIH steering committee).  I was particularly struck by this comment:

However, the aim of the ERIH is not to assess the quality of individual outputs but to assess dissemination and impact. It can therefore provide something that the RAE cannot: it can be used for aggregate benchmarking of national research systems to determine the international standing of research carried out in a particular discipline in a particular country.

Link here for a Google weblog search on this debate, while a recent Chronicle of Higher Education article (‘New Ratings of Humanities Journals Do More Than Rank — They Rankle’) is also worth reviewing.

Thus we see a new rankings initiative emerging to enable (in theory) Europe to better codify its highly developed humanities presence on the global research landscape, in a way that will enable national (at the intra-European scale) peaks (and presumably valleys) of quality output to be mapped for the humanities as a whole, but also for specific disciplines/fields. Imagine the governance opportunities available, at multiple scales, if this scheme is operationalized.

And finally, at the European scale again, University World News noted, on 23 November, that:

The European Union is planning to launch its own international higher education rankings, with emphasis on helping students make informed choices about where to study and encouraging their mobility. Odile Quintin, the European Commission’s Director-General of Education and Culture, announced she would call for proposals before the end of the year, with the first classification appearing in 2010.

A European classification would probably be compiled along the same lines as the German Centre for Higher Education Development Excellence Ranking.

European actors are being spurred into such action by multiple forces, some internal (including the perceived need to 'modernize' European universities in the context of Lisbon and the European Research Area), some external (Shanghai Jiao Tong; Times Higher-QS), and some of a global dimension (e.g., audit culture; competition for mobile students).

This latest push is also due to the French presidency of the Council of the European Union, as noted above, which is facilitating action at the regional and national scales. See, for example, details on a Paris-based conference titled 'International comparison of education systems: a European model?', which was held on 13-14 November 2008. As noted in the programme, the:

objective of the conference is to bring to the fore the strengths and weaknesses of the different international and European education systems, while highlighting the need for regular and objective assessment of the reforms undertaken by European Member States by means of appropriate indicators. It will notably assist in taking stock of:
– the current state and performance of the different European education systems,
– the ability of the different European education systems to curb the rate of failure in schools,
– the relative effectiveness of amounts spent on education by the different Member States.

The programme and list of speakers are worth perusing to acquire a sense of the broad agenda being put forward.

Multi-scalar governance vs (?) recurring revenue: the emerging dual logics of the rankings phenomenon

The rankings phenomenon is here to stay. But which logics will prevail, or at least emerge as the most important in shaping the extension of audit culture into the spheres of higher education and research?  At the moment it appears that the two main logics are:

  • Creating a new niche industry to form markets and generate recurrent revenue; and,
  • Creating new multi-scalar governance technologies to open up previously opaque higher education and research systems, so as to facilitate strategic restructuring for the knowledge economy.

These dual logics are in some ways contradictory, yet in other ways they are interdependent. This is a phenomenon that also has deep roots in the emerging centres of global higher ed and research calculation that are situated in London, Shanghai, New York, Brussels, and Washington DC.  And it is underpinned by the analytical cum revenue generating technologies provided by the Scientific division of Thomson Reuters, which develops and operates the ISI Web of Knowledge.

Market-making and governance enabling…and all unfolding before our very eyes. Yet do we really know enough about the nature of the unfolding process, including the present and absent voices, that seems to be bringing these logics to the fore?

Kris Olds

Euro angsts, insights and actions regarding global university ranking schemes

Beerkens' blog noted, on 1 July, how the university rankings effect has gone as far as reshaping immigration policy in the Netherlands. He included this extract from a government policy proposal ('Blueprint for a modern migration policy'):

Migrants are eligible if they received their degree from a university that is in the top 150 of two international league tables of universities. Because of the overlap, the list consists of 189 universities…

Quite the authority being vested in ranking schemes that are still being hotly debated!

On this broad topic, I've been traveling throughout Europe this academic year, pursuing a project not related to rankings, yet again and again rankings come up as a topic of discussion, reminding us of the de facto global governance power of rankings (and the rankers). Ranking schemes, especially the Shanghai Jiao Tong University's Academic Ranking of World Universities and the Times Higher-QS World University Rankings, are generating both governance impacts and substantial anxiety in multiple quarters.

In response, the European Commission is funding some research and thinking on the topic, while France’s new role in the rotating EU Presidency is supposed to lead to some further focus and attention over the next six months. More generally, here is a random list of European or Europe-based initiatives to examine the nature, impacts, and politics of global rankings:

And here are some recent or forthcoming events:

Yet I can't help but wonder why Europe, which generally has high-quality universities despite some significant challenges, did not seek to shed light on the pros and cons of the rankings phenomenon any earlier. In other words, despite the critical mass of brainpower in Europe, what hindered a collective, integrated, and well-funded interrogation of the ranking schemes from emerging before the ranking effects and path dependency started to take hold? Of course there was plenty of muttering, and some early research about rankings, and one could argue that I am viewing this topic through a rear-view mirror, but Europe was, arguably, somewhat late in digging into this topic considering how much of an impact these assessment cum governance schemes are having.

So, if absence matters as much as presence in the global higher ed world, let's ponder the absence, until now, of a serious European critique, or at least interrogation, of rankings and the rankers. Let me put forward four possible explanations.

First, action at a European higher education scale has been focused upon bringing the European Higher Education Area to life via the Bologna Process, which was formally initiated in 1999. Thus there were only so many resources – intellectual and material – that could be allocated to higher education, so the Europeans are only now looking outwards to the power of rankings and the rankers. In short, key actors with a European higher education and research development vision have simply been too busy to focus on the rankings phenomenon and its effects.

A second explanation might be that European stakeholders are, deep down, profoundly uneasy about competition with respect to higher education, of which benchmarking and ranking is a part. But, as the Dublin Institute of Technology’s Ellen Hazelkorn notes in Australia’s Campus Review (27 May 2008):

Rankings are the latest weapon in the battle for world-class excellence. They are a manifestation of escalating global competition and the geopolitical search for talent, and are now a driver of that competition and a metaphor for the reputation race. What started out as an innocuous consumer product – aimed at undergraduate domestic students – has become a policy instrument, a management tool, and a transmitter of social, cultural and professional capital for the faculty and students who attend high-ranked institutions….

In the post-massification higher education world, rankings are widening the gap between elite and mass education, exacerbating the international division of knowledge. They inflate the academic arms race, locking institutions and governments into a continual quest for ever increasing resources which most countries cannot afford without sacrificing other social and economic policies. Should institutions and governments allow their higher education policy to be driven by metrics developed by others for another purpose?

It is worth noting that Ellen Hazelkorn is currently finishing an OECD-sponsored study on the effects of rankings.

In short, institutions associated with European higher education did not know how to assertively critique (or at least interrogate) ranking schemes, as they never realized, until more recently, how deeply geopolitical and geoeconomic these vehicles are: they enable the powerful to maintain their standing and to harness yet more resources inward. Angst regarding competition dulled senses to the intrinsically competitive logic of global university ranking schemes, and to the political nature of their being.

Third, perhaps European elites, infatuated as they are with US Ivy League universities, or private institutions like Stanford, just accepted the schemes for the results summarized in this table from an OECD working paper (July 2007) written by Simon Marginson and Marijk van der Wende:

for they merely reinforced their acceptance of one form of American exceptionalism that has been acknowledged in Europe for some time. In other words, can one expect critiques to emerge of schemes that identify and peg, at the top, universities that many European elites would kill to send their children to? I'm not so sure. As in Asia (where I worked from 1997-2001), and now in Europe, people seem infatuated with the standing of universities like Harvard, MIT, and Princeton, but these universities really operate in a parallel universe. Unless European governments, or the EU, are willing to establish 2-3 universities the way King Abdullah University of Science and Technology (KAUST) in Saudi Arabia recently did, with a $10 billion endowment, then angling to compete with the US privates should just be forgotten about. The new European Institute of Innovation and Technology (EIT), innovative as it may become, will not rearrange the rankings results, assuming they should indeed be rearranged.

Following what could be defined as a fait accompli phase, national and European political leaders came progressively to view the low status of European universities in the two key ranking schemes – Shanghai and Times Higher – as a problematic situation. Why? The Lisbon Strategy emerged in 2000, was relaunched in 2005, and slowly started to generate impacts, while also being continually retuned. If the strategy is to "become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion", how can Europe become such a competitive global force when its universities – key knowledge producers – are so far off the fast-emerging and now hegemonic global knowledge production maps?

In this political context, especially given state control over higher education budgets and the drive of the relaunched Lisbon agenda, Europe's rankers of ranking schemes were then propelled into action, in trebuchet-like fashion. 2010 is, after all, a key target date for a myriad of European-scale assessments.

Fourth, Europe includes the UK, despite the feelings of many on both sides of the Channel. Powerful and well-respected institutions, with a wealth of analytical resources, are based in the UK, the global centre of calculation regarding bibliometrics (of which rankings are a part). Yet what role have universities like Oxford, Cambridge, Imperial College, UCL, and so on, or stakeholder organizations like Universities UK (UUK) and the Higher Education Funding Council for England (HEFCE), played in shedding light on the pros and cons of rankings for European institutions of higher education? I might be uninformed, but the critiques are not emerging from the well placed, despite their immense experience with bibliometrics. In short, as rankings aggregate data at a level of abstraction that brings whole universities into view, and place UK universities highly (up there with Yale, Harvard and MIT), these UK universities (or groups like UUK) will inevitably be concerned about their relative position, not the position of the broader regional system of which they are part, nor the rigour of the ranking methodologies. Interestingly, the vast majority of the initiatives I listed above include only representatives from universities that are ranked relatively low by the two main ranking schemes that now hold hegemonic power. I could also speculate on why the French contribution to the regional debate is limited, but will save that for another day.

These are but four of many possible explanations for why European higher education might have been relatively slow to grapple with the power and effects of university ranking schemes, considering how much angst and how many impacts they generate. This said, you could argue, as Eric Beerkens has in the comments section below, that the European response was actually not late off the mark, despite what I argued above. The Shanghai rankings emerged in June 2003, and I still recall the attention they generated when they were first circulated. Three to five years to reach sustained action is pretty quick in some sectors, and slow in others.

In conclusion, it is clear that Europe has been destabilized by an immutable mobile – a regionally and now globally understood analytical device that holds together, travels across space, and is placed in reports, ministerial briefing notes, articles, PPT presentations, newspaper and magazine stories, etc. And it is only now that Europe is seriously interrogating the power of such devices, the data and methodologies that underlie their production, and the global geopolitics and geoeconomics of which they are part and parcel.

I would argue that it is time to allocate substantial European resources to a deep, sustained, and ongoing analysis of the rankers, their ranking schemes, and associated effects. Questions remain, though, about how much light will be shed on the nature of university ranking schemes, what proposals or alternatives might emerge, and how the various currents of thought in Europe will converge or diverge as some consensus is sought. Some institutions in Europe are actually happy that this 'new reality' has emerged, for it is perceived to facilitate the 'modernization' of universities, enhance transparency at an intra-university scale, and elevate the role of the European Commission in European higher education development dynamics. Yet others equate rankings and classification schema with neoliberalism, commodification, and Americanization: this partly explains the ongoing critiques of the typology initiatives I linked to above, which are, to a degree, inspired by the German Excellence Initiative, which is in turn partially inspired by a vision of what the US higher education system is.

Regardless, the rankings topic is not about to disappear. Let us hope that the controversies, debates, and research (current and future) inspire coordinated and rigorous European initiatives that will shed more light on this new form of de facto global governance. Why? If Europe does not do it, no one else will, at least not in a manner that recognizes the diverse contributions that higher education can and should make to development processes at a range of scales.

Kris Olds

23 July update: see here for a review of a 2 July 2008 French Senate proposal to develop a new European ranking system that better reflects the nature of knowledge production (including language) in France, and in Europe more generally. The full report (French only) can be downloaded here, while the press release (French only) can be read here. France is, of course, going to publish a Senate report in French, though the likely target audience for the broader message (including a critique of the Shanghai Jiao Tong University's Academic Ranking of World Universities) only partially understands French. In some ways it would have been better to release the report simultaneously in French and English, but the contradiction of France critiquing dominant ranking schemes for their bias towards the English language, in English, was likely too much to take. In the end, though, the French critique is well worth considering, and I can't help but think that the EU, or one of the many emerging initiatives noted above, would be wise to have the report immediately translated and placed on relevant websites so that it can be downloaded for review and debate.

Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production

Numerous funding councils, academics, multilateral organizations, media outlets, and firms are exhibiting enhanced interest in the evolution of the Chinese higher education system, including its role as a site and space of knowledge production. See these three recent contributions, for example:

It is thus noteworthy that the "Scientific business of Thomson Reuters" (as it is now known) has been seeking to position itself as a key analyst of the changing contribution of China-based scholars to the global research landscape. As anyone who has worked in Asia knows, the power of bibliometrics is immense, and quickly becoming more so, within the relevant governance systems that operate across the region. The strategists at Scientific clearly have their eyes on the horizon, and are laying the foundations for a key presence in future deliberations about the production of knowledge in and on China (and the Asia-Pacific more generally).

Thomson and the gift economy

One of the mechanisms used to establish a presence and effect is the production of knowledge about knowledge (in this case patents and ISI Web of Science citable articles), as well as gifts. On the gift economy front, yesterday marked the establishment of the first 'Thomson Reuters Research Fronts Award 2008', jointly sponsored by Thomson Reuters and the Chinese Academy of Sciences (CAS) "Research Front Analysis Center", National Science Library. The awards ceremony was held in the sumptuous setting of the Hotel Nikko New Century Beijing.

As the Thomson Reuters press release notes:

This accolade is awarded to prominent scientific papers and their corresponding authors in recognition of their outstanding pioneering research and influential contribution to international research and development (R&D). The event was attended by over 150 of the winners’ industry peers from leading research institutions, universities and libraries.

The award is significant to China’s science community as it accords global recognition to their collaborative research work undertaken across all disciplines and institutions and highlights their contribution to groundbreaking research that has made China one of the world’s leading countries for the influence of its scientific papers. According to the citation analysis based on data from Scientific’s Web of Science, China is ranked second in the world by number of scientific papers published in 2007. [my emphasis]

Thomson incorporates ‘regional’ journals into the Web of Science

It was also interesting to receive news two days ago that the Scientific business of Thomson Reuters has just added “700 new regional journals” to the ISI Web of Science, journals that “typically target a regional rather than international audience by approaching subjects from a local perspective or focusing on particular topics of regional interest”. The breakdown of newly included journals is below, and was kindly sent to me by Thomson Reuters:

Scientific only admits journals that meet international standard publishing practices and include notable elements of English, so as to enable the database development process, as noted here:

All journals added to the Web of Science go through a rigorous selection process. To meet stringent criteria for selection, regional journals must be published on time, have English-language bibliographic information (title, abstract, keywords), and cited references must be in the Roman alphabet.

In a general sense this is a positive development, one that many regionally focused scholars have spent years crying out for. There are inevitably some issues still being grappled with, such as just which 'regional' journals are included, the implications for authors and publishers of including English-language bibliographic information (not cheap on a mass basis), and whether it really matters in the end to a globalizing higher education system that seems fixated on international refereed (IR) journal outlets. Still, this is progress of a notable type.

Intellectual Property (IP) generation (2003-2007)

The horizon scanning that Thomson Reuters is engaged in generates relevant information for many audiences. For example, see the two graphics below, which track 2003-2007 patent production rates and levels within select "priority countries". The graphics are available in World IP Today by Thomson Reuters (2008). Click on them to view at a sensible size.

Noteworthy is the fact that:

China has almost doubled its volume of patents from 2003-2007 and will become a strong rival to Japan and the United States in years to come. Academia represents a key source of innovation in many countries. China has the largest proportion of academic innovation. This is strong evidence of the Chinese Government's drive to strengthen its academic institutions.

Thus we see China as a rapidly increasing producer of IP (in the form of patents), though in a system that is relatively more dependent upon its universities to act as a base for the production process. To be sure, private and state-owned enterprises will become more significant over time in China (and Russia), but the relative importance of universities (versus firms or research-only agencies) in the knowledge production landscape is to be noted.

Through the production of such knowledge, technologies, and events, the Scientific business of Thomson Reuters seeks to function as the key global broker of knowledge about knowledge. Yet the role of this institution in providing and reshaping the architecture that shapes ever more scholars’ careers, and ever more higher education systems, is remarkably under-examined.

Kris Olds

ps: alas, GlobalHigherEd is still being censored in China, as we use a WordPress.com blogging platform and the Chinese government is blanket censoring all WordPress.com blogs. So much for knowledge sharing!

Thomson Innovation, UK Research Footprints®, and global audit culture

Thomson Scientific, the private firm fueling the bibliometrics drive in academia, is in the process of positioning itself as the anchor point for data on intellectual property (IP) and research. Following tantalizers in the form of free reports such as World IP Today: A Thomson Scientific Report on Global Patent Activity from 1997-2006 (from which the two images below are taken), Thomson Scientific is establishing, in phases, Thomson Innovation, which will provide, when completed:

  • Comprehensive prior art searching with the ability to search patents and scientific literature simultaneously
  • Expanded Asian patent coverage, including translations of Japanese full-text and additional editorially enhanced abstracts of Chinese data
  • A fully integrated searchable database combining Derwent World Patent Index® (DWPI℠) with full-text patent data to provide the most comprehensive patent records available
  • Support of strategic intellectual property decisions through:
    • powerful analysis and visualization tools, such as charting, citation mapping and search result ranking
    • and, integration of business and news resources
  • Enhanced collaboration capabilities, including customizable folder structures that enable users to organize, annotate, search and share relevant files.

[Images: two patent-activity charts from Thomson Scientific’s World IP Today]

Speaking of bibliometrics, Evidence Ltd., the private firm that is shaping some of the debates about the post-Research Assessment Exercise (RAE) system of evaluating research quality and impact in UK universities, recently released the UK Higher Education Research Yearbook 2007. This £255 report (the price for higher education customers):

[P]rovides the means to gain a rapid overview of the research strengths of any UK Higher Education institution, and compare its performance with that of its peers. It is an invaluable tool for those wishing to assess their own institution’s areas of relative strength and weakness, as well as [a] versatile directory for those looking to invest in UK research. It will save research offices in any organisation with R&D links many months of work, allowing administrative and management staff the opportunity to focus on the strategic priorities that these data will help to inform….

It sets out in clear diagrams and summary tables the research profile for Universities and Colleges funded for research. Research Footprints® compare each institution’s performance to the average for its sector, allowing strengths and weaknesses to be rapidly identified by research managers and by industrial customers.

See below for one example of how a sample university (in this case the University of Warwick) has its “Research Footprint®” graphically represented. The image is included in a brief article about Warwick by Vice-Chancellor Nigel Thrift, available on Warwick’s News & Events website.

[Image: the University of Warwick’s Research Footprint® diagram]

Given the metrics that are utilized, it is clear that individual researchers’ footprints, even if the data are not published, will be available for systematic and comparative analysis, thereby enabling the governance of faculty with the back-up of ‘data’, and the targeted recruitment of the ‘big foot’ wherever s/he resides (though Sasquatches presumably need not apply!).
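Evidence Ltd. does not spell out its methodology in public, but the basic arithmetic of such profile comparisons is easy to sketch. In the minimal, hypothetical example below, every indicator name and figure is invented; the only point is that each dimension of a ‘footprint’ is the ratio of an institution’s performance to its sector’s average, so values above 1.0 read as strengths and values below as weaknesses.

```python
# Minimal sketch of a "footprint"-style profile: each indicator is expressed
# relative to the sector average, so 1.0 means "exactly at the average".
# Indicator names and all numbers are invented for illustration only.

sector_avg = {
    "papers": 420.0,
    "citations_per_paper": 5.2,
    "research_income_gbp_m": 38.0,
    "phd_awards": 190.0,
}

institution = {
    "papers": 510.0,
    "citations_per_paper": 6.8,
    "research_income_gbp_m": 55.0,
    "phd_awards": 170.0,
}

# Each dimension of the footprint is institution / sector average.
footprint = {k: institution[k] / sector_avg[k] for k in sector_avg}

for indicator, ratio in footprint.items():
    flag = "strength" if ratio > 1.0 else "weakness"
    print(f"{indicator}: {ratio:.2f} x sector average ({flag})")
```

Exactly the same arithmetic works at the level of an individual researcher, which is what makes the prospect of person-level footprints so plausible.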

Kris Olds

Quantitative metrics for “research excellence” and global positioning

In last week’s conference on Realising the Global University, organised by the Worldwide Universities Network (WUN), Professor David Eastwood, Chief Executive of the Higher Education Funding Council for England (HEFCE), spoke several times about the role of funding councils in governing universities and academics so as to enhance England’s standing in the global higher education sphere (‘market’ is perhaps the more appropriate term, given the tone of the discussions). One interesting dimension of Eastwood’s position was HEFCE’s uneasy yet dependent relationship with bibliometrics and globally-scaled university ranking schemes as frames for the UK’s position, especially given HEFCE’s influence among the funding councils of England, Scotland, Wales and Northern Ireland (which together make up the UK). Eastwood expressed satisfaction with the UK’s relative standing, tempered by (a) concern about emerging ‘Asian’ countries (in truth, mainly China and, to a lesser degree, Singapore), (b) the need to compete with research powerhouses (especially the US), and (c) the need to forge linkages with those powerhouses and the emerging ‘contenders’, ideally via joint UK-US and UK-China research projects, which are likely to lead to more jointly written papers; papers that are posited to generate relatively higher citation counts. These comments help us better understand the opening of a Research Councils UK (RCUK) office in China on 30 October 2007.

In this context, and further to our 9 November entry on bibliometrics and audit culture, it is worth noting that HEFCE today launched a consultation process about just this: bibliometrics as the core element of a new framework for assessing and funding research, especially with respect to “science-based” disciplines. HEFCE notes that “some key elements in the new framework have already been decided” (i.e., get used to the idea, and quick!), and that the consultation is instead focused on “how they should be delivered”. Elements of the new framework include (but are not limited to):

  • Subject divisions: within an overarching framework for the assessment and funding of research, there will be distinct approaches for the science-based disciplines (in this context, the sciences, technology, engineering and medicine with the exception of mathematics and statistics) and for the other disciplines. This publication proposes where the boundary should be drawn between these two groups and proposes a subdivision of science-based disciplines into six broad subject groups for assessment and funding purposes.
  • Assessment and funding for the science-based disciplines will be driven by quantitative indicators. We will develop a new bibliometric indicator of research quality. This document builds on expert advice to set out our proposed approach to generating a quality profile using bibliometric data, and invites comments on this. (A toy sketch of what such a quality profile might look like follows this list.)
  • Assessment and funding for the other disciplines: a new light touch peer review process informed by metrics will operate for the other disciplines (the arts, humanities, social sciences and mathematics and statistics) in 2013. We have not undertaken significant development work on this to date. This publication identifies some key issues and invites preliminary views on how we should approach these.
  • Range and use of quantitative indicators: the new funding and assessment framework will also make use of indicators of research income and numbers of research students. This publication invites views on whether additional indicators should be used, for example to capture user value, and if so on what basis.
  • Role of the expert panels: panels made up of eminent UK and international practising researchers in each of the proposed subject groups, together with some research users, will be convened to advise on the selection and use of indicators within the framework for all disciplines, and to conduct the light touch peer review process in non science-based disciplines. This document invites proposals for how their role should be defined within this context.
  • Next steps: the paper identifies a number of areas for further work and sets out our proposed workplan and timetable for developing and introducing the new framework, including further consultations and a pilot exercise to help develop a method for producing bibliometric quality indicators.
  • Sector impact: a key aim in developing the framework will be to reduce the burden on researchers and higher education institutions (HEIs) created by the current arrangements. We also aim for the framework to promote equal opportunities. This publication invites comments on where we need to pay particular attention to these issues in developing the framework and what more can be done.
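
HEFCE had not, at the time of writing, fixed the method for turning citation data into a quality profile, so the following is only a hypothetical sketch of the general approach such an indicator might take: normalise each paper’s citation count by the world average for its field and year, then bin papers into quality bands. The field averages, band thresholds and example papers below are all invented for illustration.

```python
# Toy sketch of a citation-based "quality profile" (NOT HEFCE's actual
# method, which was still under consultation). Each paper's citations are
# normalised by the (invented) world average for its field and year, and
# papers are then binned into quality bands.

FIELD_YEAR_AVG = {("physics", 2005): 8.0, ("physics", 2006): 6.5}  # invented

papers = [  # (field, year, citations) -- invented example data
    ("physics", 2005, 40),
    ("physics", 2005, 9),
    ("physics", 2006, 3),
    ("physics", 2006, 0),
]

def band(rel_impact: float) -> str:
    """Map a field-normalised citation ratio to a band (thresholds invented)."""
    if rel_impact >= 4.0:
        return "4*"
    if rel_impact >= 2.0:
        return "3*"
    if rel_impact >= 1.0:
        return "2*"
    if rel_impact > 0.0:
        return "1*"
    return "unclassified"

profile: dict[str, int] = {}
for field, year, cites in papers:
    rel = cites / FIELD_YEAR_AVG[(field, year)]
    profile[band(rel)] = profile.get(band(rel), 0) + 1

for b in ("4*", "3*", "2*", "1*", "unclassified"):
    share = 100 * profile.get(b, 0) / len(papers)
    print(f"{b}: {share:.0f}% of papers")
```

The hard problems, of course, lie outside the arithmetic: choosing the field classification, handling self-citations and citation windows, and attributing multi-authored papers, which is presumably where much of the consultation’s “further work” would land.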

This process is worth following even if you are not working for a UK institution, for it sheds light on the emerging role of bibliometrics as a governing tool (evident in more and more countries), especially with respect to the global (re)positioning of national higher education systems vis-à-vis particular understandings of ‘research quality’ and ‘productivity’. Over time, of course, it will also transform the behaviour of many UK academics, perhaps spurring everything from heightened competition to get into high citation impact factor (CIF) journals, to greater international collaborative work (if such work indeed generates more citations), to the possible creation of “citation clubs” (much more easily done, perhaps, than HEFCE realizes), to less commitment to high-quality teaching, to a myriad of other unknown impacts, for good and for bad, by the time the new framework is “fully driving all research funding” in 2014.
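
Since so much of this anticipated behaviour orbits around journal impact factors, it is worth recalling how the standard Thomson (ISI) two-year impact factor is computed: citations received in a given year to the items a journal published in the previous two years, divided by the number of citable items it published in those two years. The figures in the sketch below are invented.

```python
# The standard two-year journal impact factor. The 2007 figure, for example,
# is citations received in 2007 to items published in 2005-2006, divided by
# the number of citable items published in 2005-2006. Numbers are invented.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 600 citations in 2007 to the 250 articles a journal published in
# 2005 and 2006 combined:
print(f"2007 impact factor = {impact_factor(600, 250):.2f}")  # 2.40
```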

Kris Olds