Quantitative metrics for “research excellence” and global positioning

At last week’s conference on Realising the Global University, organised by the Worldwide Universities Network (WUN), Professor David Eastwood, Chief Executive of the Higher Education Funding Council for England (HEFCE), spoke several times about the role of funding councils in governing universities and academics so as to enhance England’s standing in the global higher education sphere (‘market’ is perhaps the more appropriate term, given the tone of discussions). One of the interesting dimensions of Eastwood’s position was HEFCE’s uneasy yet dependent relationship with bibliometrics and globally scaled university ranking schemes as frames for the UK’s position, given HEFCE’s weight among the funding councils of England, Scotland, Wales and Northern Ireland (which together make up the UK). Eastwood expressed satisfaction with the UK’s relative standing, yet also (a) concern about emerging ‘Asian’ countries (really just China, and to a lesser degree Singapore), (b) the need to compete with research powerhouses (especially the US), and (c) the need to forge linkages with research powerhouses and emerging ‘contenders’ (ideally via joint UK-US and UK-China research projects, which are likely to lead to more jointly written papers; papers that are posited to generate relatively higher citation counts). These comments help us better understand the opening of a Research Councils UK (RCUK) office in China on 30 October 2007.

In this context, and further to our 9 November entry on bibliometrics and audit culture, it is worth noting that HEFCE today launched a consultation process about just this: bibliometrics as the core element of a new framework for assessing and funding research, especially with respect to “science-based” disciplines. HEFCE notes that “some key elements in the new framework have already been decided” (i.e., get used to the idea, and quickly!), and that the consultation process is instead focused on “how they should be delivered”. Elements of the new framework include (but are not limited to):

  • Subject divisions: within an overarching framework for the assessment and funding of research, there will be distinct approaches for the science-based disciplines (in this context, the sciences, technology, engineering and medicine with the exception of mathematics and statistics) and for the other disciplines. This publication proposes where the boundary should be drawn between these two groups and proposes a subdivision of science-based disciplines into six broad subject groups for assessment and funding purposes.
  • Assessment and funding for the science-based disciplines will be driven by quantitative indicators. We will develop a new bibliometric indicator of research quality. This document builds on expert advice to set out our proposed approach to generating a quality profile using bibliometric data, and invites comments on this. (A rough sketch of how such a profile might be built follows this list.)
  • Assessment and funding for the other disciplines: a new light touch peer review process informed by metrics will operate for the other disciplines (the arts, humanities, social sciences and mathematics and statistics) in 2013. We have not undertaken significant development work on this to date. This publication identifies some key issues and invites preliminary views on how we should approach these.
  • Range and use of quantitative indicators: the new funding and assessment framework will also make use of indicators of research income and numbers of research students. This publication invites views on whether additional indicators should be used, for example to capture user value, and if so on what basis.
  • Role of the expert panels: panels made up of eminent UK and international practising researchers in each of the proposed subject groups, together with some research users, will be convened to advise on the selection and use of indicators within the framework for all disciplines, and to conduct the light touch peer review process in non science-based disciplines. This document invites proposals for how their role should be defined within this context.
  • Next steps: the paper identifies a number of areas for further work and sets out our proposed workplan and timetable for developing and introducing the new framework, including further consultations and a pilot exercise to help develop a method for producing bibliometric quality indicators.
  • Sector impact: a key aim in developing the framework will be to reduce the burden on researchers and higher education institutions (HEIs) created by the current arrangements. We also aim for the framework to promote equal opportunities. This publication invites comments on where we need to pay particular attention to these issues in developing the framework and what more can be done.
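
To make the proposed “quality profile” more concrete, here is a minimal sketch of one way such a profile might be built, assuming each paper’s citation count is rebased against a world average for its field and then binned into quality bands. The thresholds, baseline and band labels below are my own illustrative assumptions, not HEFCE’s (still-to-be-decided) method:

```python
# Hypothetical sketch of a bibliometric "quality profile": the share of a
# unit's papers falling into each quality band, after rebasing raw citation
# counts against an assumed world average for the field. Thresholds,
# baseline and band labels are illustrative assumptions only.

BAND_LABELS = ["4*", "3*", "2*", "1*", "unclassified"]

def quality_profile(citations, field_baseline, thresholds=(4.0, 2.0, 1.0, 0.5)):
    """Return the proportion of papers in each quality band.

    citations      -- citation counts, one per paper
    field_baseline -- assumed world-average citations per paper for the field
    thresholds     -- multiples of the baseline separating the bands
    """
    counts = [0] * (len(thresholds) + 1)
    for c in citations:
        rci = c / field_baseline  # relative citation impact
        for i, t in enumerate(thresholds):
            if rci >= t:
                counts[i] += 1
                break
        else:  # fell below every threshold
            counts[-1] += 1
    return {label: n / len(citations) for label, n in zip(BAND_LABELS, counts)}

# Ten papers in a field assumed to average 8 citations per paper:
print(quality_profile([40, 25, 12, 9, 8, 6, 5, 3, 1, 0], field_baseline=8.0))
# -> {'4*': 0.1, '3*': 0.1, '2*': 0.3, '1*': 0.2, 'unclassified': 0.3}
```

The substantive debates, of course, hide inside the assumptions: who sets the field baselines, where the thresholds sit, and how papers get assigned to fields in the first place.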

This process is worth following even if you are not working for a UK institution, for it sheds light on the emerging role of bibliometrics as a governing tool (evident in more and more countries), especially with respect to the global (re)positioning of national higher education systems vis-à-vis particular understandings of ‘research quality’ and ‘productivity’. Over time, of course, it will also transform the behaviour of many UK academics, perhaps spurring on everything from heightened competition to get into journals with high citation impact factors (CIFs), to greater international collaborative work (if such work indeed generates more citations), to the possible creation of “citation clubs” (much more easily done, perhaps, than HEFCE realizes), to less commitment to high-quality teaching, and a myriad of other unknown impacts, for good and for bad, by the time the new framework is “fully driving all research funding” in 2014.
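
For readers unfamiliar with the journal impact factor invoked above: in Thomson Scientific’s standard formulation it is a simple two-year ratio. A minimal sketch:

```python
# Minimal sketch of the standard two-year journal impact factor: citations
# received in year Y to items a journal published in years Y-1 and Y-2,
# divided by the number of citable items it published in those two years.

def impact_factor(citations_in_year, citable_items_prev_two_years):
    return citations_in_year / citable_items_prev_two_years

# Example: 1,200 citations in 2007 to papers from 2005-06, against 400
# citable items published in 2005-06 -> an impact factor of 3.0.
print(impact_factor(1200, 400))  # -> 3.0
```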

Kris Olds

Citation indices, bibliometrics, and audit culture: new research findings

The use of citation indices (bibliometrics, more broadly, which includes article counting as well as citation counts) is sweeping much of the world of higher education governance. Citation indices, including the Thomson Scientific-produced Social Science Citation Index and Science Citation Index, and even Google Scholar, are undeniably part of modern academic life, though they are highly contested, reflective of and disproportionately supportive of the Anglo-American world (including English as lingua franca), and little understood. Like ranking schemes (which witnessed a flurry of activity this week: see the Times Higher Education Supplement global ranking results, Beerkens’ Blog on the geographies of the new THES rankings, and the Maclean’s results at a Canadian scale), bibliometrics are used to analyze scholarly activity and frame the distribution of resources (from the individual up to the institutional scale). They are increasingly used to govern academic life, for good and for bad, and they produce a myriad of impacts that need to be much more fully explored.

The UK makes particularly heavy use of bibliometrics, spurred on by its Research Assessment Exercise (RAE). For this reason, UK higher education institutions should (one would hope!) have a more advanced understanding of the uses and abuses of this analytical cum governance tool. It is thus worth noting that Universities UK (UUK) released a new report today on the topic – The use of bibliometrics to measure research quality in UK higher education institutions – to generate discussion about how to reform the much-maligned RAE process. The report was produced by Evidence Ltd., a UK “knowledge-based company specializing in data analysis, reports and consultancy focusing on the international research base”.

Evidence is led by Jonathan Adams, the person who wrote an illuminating chapter in the RAND Corporation report (Perspectives on U.S. Competitiveness in Science and Technology) we recently profiled.

Evidence also has a “strategic alliance” with Thomson Scientific (previously known as Thomson ISI), which produces the citation indices noted above.

As the UUK press release notes:

The report assesses the use of bibliometrics in both STEM (science, technology, engineering and mathematics) and non-STEM subjects, and the differences in citation behaviour among subject disciplines.

Professor Eric Thomas, chair of Universities UK’s Research Policy Committee, said: “It is widely anticipated that bibliometrics will be central to the new system, but we need to ensure it is technically correct and able to inspire confidence among the research community.”

The report highlights that:

  • Bibliometrics are probably the most useful of a number of variables that could feasibly be used to measure research performance.
  • There is evidence that bibliometric indices do correlate with other, quasi-independent measures of research quality – such as RAE grades – across a range of fields in science and engineering.
  • There is a range of bibliometric variables available as possible quality indicators. There are strong arguments against the use of (i) output volume, (ii) citation volume, (iii) journal impact and (iv) frequency of uncited papers.
  • ‘Citations per paper’ is a widely accepted index in international evaluation. Highly-cited papers are recognised as identifying exceptional research activity.
  • The accuracy and appropriateness of citation counts are critical factors.
  • There are differences in citation behaviour between STEM and non-STEM subjects, as well as among different subject disciplines (a rough normalisation sketch follows this list).
  • Metrics do not take into account contextual information about individuals, which may be relevant. They also do not always take into account research from across a number of disciplines.
  • The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration.
  • Bibliometric indicators will need to be linked to other metrics on research funding and on research postgraduate training.
  • There are potential behavioural effects of using bibliometrics which may not be picked up for some years.
  • There are data limitations where researchers’ outputs are not comprehensively catalogued in bibliometrics databases.
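
The points about ‘citations per paper’ and disciplinary citation cultures are connected: a raw citations-per-paper figure only becomes comparable across fields once it is rebased against each field’s citation norms. A hypothetical sketch, with field averages invented purely for illustration:

```python
# Hypothetical sketch of field normalisation: the same citations-per-paper
# figure means very different things in different disciplines, so it is
# rebased against the field's world average. The averages below are
# invented for illustration only.

FIELD_WORLD_AVERAGE = {
    "molecular biology": 25.0,  # a high-citation discipline
    "mathematics": 3.0,         # a low-citation discipline
}

def rebased_impact(citations_per_paper, field):
    """Citations per paper as a multiple of the field's world average."""
    return citations_per_paper / FIELD_WORLD_AVERAGE[field]

# Six citations per paper is twice the (assumed) world average in
# mathematics, while twenty is below average in molecular biology:
print(rebased_impact(6.0, "mathematics"))         # -> 2.0
print(rebased_impact(20.0, "molecular biology"))  # -> 0.8
```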

See here for one early reaction from the Guardian‘s Donald Macleod. This report, and subsequent reactions to it, are further fodder for ongoing debates about the global higher ed audit culture that is emerging, like it or not…

Kris Olds