University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the area of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. Rosemary served as a member of the ESRC Grants Board (1999-2003), and as a Panel Member of the Education Research Assessment Exercise in 1996, 2001 and 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills (DIUS) over 2008.

~~~~~~~~~~~~~~

Institutional performance of universities and their academic staff and students is a very topical issue in many countries, for potential students and their families and sponsors, for governments and for businesses. As well as numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear at the top of these rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for Innovation, Universities and Skills (DIUS), commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were: On-Line Higher Education Learning; Intellectual Property and research benefits; the Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engagement in knowledge transfer activity; high-quality teaching; improving workforce skills; and widening participation.

This list could be seen as predictable, since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and in relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, and surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out: higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), and the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume but not impact.

However, what is not emphasized enough is that the consequences of a performance measure, once it is made public, are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus in the RAE 2008, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, and being helped by HEFCE to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, which HEFCE feels it is not yet ready to pronounce on. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added to recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors and importantly by articulating the values and visions of staff and students and possibly very little by ‘performance’ measures; they are often appealing to an international as well as a national audience and perfect markets with detailed reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective: it fails to grasp sufficiently how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc.), and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

Global university rankings 2007: interview with Simon Marginson

Editor’s note: The world is awash in discussion and debate about university (and disciplinary) ranking schemes, and what to do about them (e.g. see our recent entry on this). Malaysia, for example, is grappling with a series of issues related to the outcome of the recent global rankings schemes, partly spurred on by ongoing developments, but also by a new drive to create a differentiated higher education system (including so-called “Apex” universities). In this context Dr. Sarjit Kaur, Associate Research Fellow, IPPTN, Universiti Sains Malaysia, conducted an interview with Simon Marginson, Australian Professorial Fellow and Professor of Higher Education, Centre for the Study of Higher Education, The University of Melbourne. The interview was conducted on 22 November 2007.
~~~~~~~~~~~~~~~~~~~~~~~

Q: What is your overall first impression of the 2007 university rankings?

A: The Shanghai Jiao Tong (SHJT) rankings came out first, and that ranking is largely valid. The outcome shows a domination by the large universities of the Western world, principally English-speaking countries and principally the US. There are no surprises in that when you look at the fact that the US spends seven times as much on higher education as the next nation, which is Japan, and seven times as much is a very big advantage in a competitive sense. The Times Higher Education Supplement (THES) rankings are not valid, in my view – you have a survey which gets a 1% return, is biased towards certain countries, and so on. The outcome tends to show that similar kinds of universities do well, in the top 50 anyway, as in the SHJT, because research-strong universities also have strong reputations and that shows up strongly in the THES. But the Times one is more plural: major universities in a number of countries (the oldest, largest and best-established universities in those countries) appear in the top 100 even though they aren’t strong enough in research terms to appear in the SHJT. But I don’t put any real value on the Times results – they go up and down very fast. Institutions that are in the top 100 can then disappear from the top 200 two years later, like Universiti Malaya did. It doesn’t mean too much.

Q: In both global university rankings, UK and US universities still dominate the top ten places. What’s your comment on this?

A: Well, it’s predictable that they would dominate in terms of a research measure because they have the largest concentration of research power – publications in English-language journals, which are mostly edited from these countries and dominated by their scholars in numbers. The Times is partly driven by research (only a fifth of it) and partly driven by the number of international students institutions have – they tend to go to the UK and Australia more than to the US, but they tend to be in English-speaking countries as well. In the Times, one half (50%) is determined by reputation, through reputational surveys of which one is worth 40% and the other 10%. Now, reputation tends to follow established prestige and the English language, where the universities have the prestige as well. But the other factor is that the reputational surveys are biased in favour of countries which use the Times, read the Times and know the Times (usually in the British Empire), so it tends to be the UK, Australia, New Zealand, Singapore, Malaysia and Hong Kong that put in a lot of survey returns, whereas the Europeans don’t put in many, and many other Asian countries don’t put in many. So that’s another reason why the English universities would do well. In fact the English universities do very well in the Times rankings – much better than they should really, considering their research strengths.

Q: What’s your comment on how most Asian universities performed in this year’s rankings?

A: Look, I think the SHJT is the one to watch because that gives you realistic measures of performance. The problem with the SHJT is that it tends to be a bit delayed – there’s a lag between the time you perform and the time it shows up in the rankings, because the citation and publication measures are operating off the second half of the 90s in the Thomson HiCi (highly cited researchers) count used by the SHJT. So when the first half of the 2000s starts to show up, you’re going to see the National University of Singapore go up from the top 200 into the top 100 pretty fast. You would expect the Chinese universities to follow as well, a bit more slowly, so that Tsinghua, Peking University, Fudan and Jiao Tong itself will move towards the top 200 and top 100 over time, because they are really building up their strengths. That would be a useful trend line to follow. Korean universities are also going to improve markedly in the rankings over time, with Seoul National leading the way. Japan is already a major presence in the rankings, of course. I wouldn’t expect any other Asian country, at this point, to start to show up strongly. There’s no reason why the Malaysian universities should suddenly move up the research ranking table when they are not investing any more in research than they were before. It will be a long time before Malaysia starts creating an impact in the SHJT, because even if policy changed tomorrow to require universities to build up their basic research strengths, that would involve sending selected people abroad for PhDs, establishing enough strength in USM, UKM and UM and a couple more to be major research bases at home, having the capacity to train people at PhD level at home, and performing a lot of basic research. To do that you have to pay competitive salaries; you have to (like Singapore does) bring people back who might otherwise want to work in the US or UK, and that means paying something like UK salaries or, if not, American ones. Then you have to settle them down, and it will take them five years before they produce their best output. Malaysia is perhaps better at marketing than it is at research performance, because it has an international education sector and because the government is quite active in promoting the university sector offshore – and that’s good and that’s how it should be.

Q: What about the performance of Australian universities?

A: They performed as they should have in the SHJT, which is to say we got 2 in the top 100. That’s not very good when you look at Canada, which is a country that is only slightly wealthier and about 2% bigger, with a similar kind of culture and quality, and it does much better – it has 2 in the top 40 because it spends a lot more on research. Australia would do better in the SHJT if more than just ANU were being funded specially for research. Sydney, Queensland and Western Australia were in the top 150, which is not a bad result, New South Wales is in the top 200, and Adelaide and Monash were in the top 300, as was Macquarie, I think. So it’s 9 in the top 300, which is reasonably good, but there’s none in the top 50, which is not good. Australia is not there yet in being regarded as a serious research power. In the THES rankings, Australian universities did extremely well because the survey heavily favours those countries which use the Times, know the Times and tend to return the surveys in higher than average numbers, and Australia is one of those; and because Australia’s international education sector is heavily promoted and Australia has a lot of international students, which pushes its position up on the internationalisation indicator. So Australia comes out scoring well in the THES rankings, with 11 universities in the top 100, and that’s just absurd when you look at the actual strengths of Australian universities and even their reputation worldwide – they’re not that strong overall as research-based institutions. I’d say the same for the British universities too – they did too well. University College London (UCL) this year is 9th in the ranking while stellar institutions like Stanford and the University of California, Berkeley were 19th and 22nd – that doesn’t make any sense and it’s a ludicrous result.

Q: It is widely acknowledged that in the higher education sector the keys to global competition are research performance and reputation. Do you think the rankings capture these aspects competently?

A: Well, I think the SHJT is not bad on research performance. There are a lot of ways you can do this, and I think using Nobel Prizes is not really a good indicator, because while the people who receive the prize in the sciences and economics are usually good people, people who are just as good may never receive a prize – because of the way the prizes are decided, it’s arguable whether it’s pure merit. Anyone who gets a prize has merit, but it doesn’t mean it’s the highest merit of anyone possible that year. Given that the Nobel counts towards 30% of the total, I think its impact is probably a little exaggerated. So I’d take that out and I’d use something like the citation-per-head measure, which also appears in the THES rankings, actually using similar data, but which can be done with the SHJT database as well. But there are a lot of problems – one of the issues is that some disciplines cite more than others. Medicine cites much more heavily than engineering, so a university strong in medicine tends to look rather good on the Jiao Tong indicators compared to universities strong in engineering, and many of the Chinese universities, and universities in Singapore and Australia too, are particularly strong in engineering, so that doesn’t help them. But once you start to manipulate the data you’re on a bit of a slippery slope, because there are many other ways you can do it. I think the best measures are probably those developed at Leiden University, where they control for the size of the university and they control for the disciplines. They don’t take it any further than that, and they are very careful and transparent when they do that. So that’s probably the best single set of research outcome measures, but there are always arguments both ways when you’re trying to create a level playing field and recognise true merit. The Times doesn’t measure reputation well when you have a survey with a 1% return rate which is biased towards 4 or 5 countries and under-represents most of the others. That’s not a good way to measure reputation, so we don’t know reputation from the point of view of the world; the THES is basically a UK university ranking.
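To make the discipline-normalisation point concrete, here is a minimal illustrative sketch (the institutions, citation figures and baselines are invented; this is not the SHJT, THES or Leiden methodology) of how a raw citations-per-head figure favours a medicine-heavy institution, and how scaling each field’s citations by a world baseline for that field – roughly the kind of control Leiden applies – levels the comparison:

```python
# Illustrative only: invented data and a simplified normalisation, not any
# ranker's actual method. Raw citations per head favour fields that cite
# heavily (medicine); dividing by a per-field world baseline removes much of
# that advantage.

world_baseline = {"medicine": 12.0, "engineering": 4.0}  # hypothetical cites/paper

# (field, papers, total citations) per invented institution
institutions = {
    "Medical U": [("medicine", 900, 11700), ("engineering", 100, 380)],
    "Engineering U": [("medicine", 100, 1150), ("engineering", 900, 3800)],
}
staff = {"Medical U": 1000, "Engineering U": 1000}

for name, outputs in institutions.items():
    total_cites = sum(c for _, _, c in outputs)
    raw_per_head = total_cites / staff[name]
    # field-normalised: each field's citations are scaled by its world baseline
    normalised = sum(c / world_baseline[f] for f, _, c in outputs) / staff[name]
    print(f"{name}: raw citations/head = {raw_per_head:.2f}, "
          f"field-normalised score = {normalised:.2f}")
```

On the raw figure the medicine-heavy institution looks nearly two and a half times stronger; after the field adjustment the two are roughly level, which is the kind of distortion that discipline controls are designed to remove.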

Q: What kinds of methodological criticisms would you have against the SHJT in comparison to the THES?

A: I don’t think there’s anything the THES does better, except that the THES uses the citation-per-head measure, which is probably a good idea. The SHJT uses a per-head measure of research performance as a whole, which is probably a less valuable way to take size into account; I think the way Leiden does it is better than either as a size measure. That’s the only thing the THES does better, and everything else the THES does a good deal worse, so I wouldn’t want to imitate the THES in any circumstances. The other problem with the Times is the composite indicator – how do you equate the student-staff ratio, which is meant to be a measure of teaching capacity, with the rest? How can you give that 20%, and 20% to research and 50% to reputation? What does that mean? Why? Why not give teaching 50%, why not give research 50%? It’s so arbitrary. There’s no theory at the base of this. It’s just people sitting in a market research company and the Times office, guessing about how best to manipulate the sector. Social science should be very critical of this kind of thing, regardless of how well or how badly the university is doing.
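As a small sketch of the arbitrariness point (the universities, component scores and weights below are hypothetical, not the actual THES inputs), the same underlying scores can produce opposite rank orders depending purely on which weighting the compiler happens to choose:

```python
# Illustrative only: hypothetical component scores (0-100) for two invented
# universities. The composite rank order flips when the arbitrary weights
# change, even though the underlying scores are identical.

scores = {
    "Uni A": {"reputation": 90, "citations": 55, "student_staff": 60},
    "Uni B": {"reputation": 60, "citations": 85, "student_staff": 80},
}

def composite(weights):
    """Weighted sum of component scores for each university."""
    return {u: round(sum(weights[k] * v for k, v in s.items()), 1)
            for u, s in scores.items()}

# A reputation-heavy weighting puts Uni A first...
print(composite({"reputation": 0.5, "citations": 0.2, "student_staff": 0.3}))
# ...while an equally 'plausible' research-heavy weighting puts Uni B first.
print(composite({"reputation": 0.2, "citations": 0.5, "student_staff": 0.3}))
```

Neither weighting is more theoretically grounded than the other, which is exactly the point Marginson makes about the composite indicator.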

Q: In your opinion, have these global university rankings gained the trust or the confidence of mainstream public and policy credibility?

A: They’ll always get publicity if they come from apparently authoritative sources and appear to cover the world. So it’s possible, as with the Times, to develop a bad ranking and get a lot of credibility. But the Times has now lost a good deal of ground, and it is losing credibility first in informed circles like social science, then with policy makers, then with the public and the media. Its results are so volatile, and universities get treated so harshly, going up and down so fast when their performance is not changing. So everyone is now beginning to realize that there is no real relationship between the merit of the university and the outcome of the ranking. And once that happens, the ranking has no ground – it’s gone, it’s finished; and that’s what’s happening to the Times. It will keep coming out for a bit longer, but it might stop altogether because its credibility is really declining now.

Q: To what extent do university rankings help intensify global competition for HiCi researchers or getting international doctoral students or the best postgraduate students?

A: I think the Jiao Tong has had a big impact in focusing attention, in a number of countries, on getting universities into the top 100 or even the top 500 for that matter (and in some countries the top 50 or top 20), and that is leading some nations – you could name China and Germany, for example – to concentrate research investment to try to boost the position of individual universities and even disciplines, because the Jiao Tong also measures performance in five broad discipline areas, as does the Times. I think that kind of policy effect will continue, and certainly having a single world ranking which is credible, such as the Jiao Tong, will help intensify global competition and lead everyone to see the world in terms of a single competition in higher education, particularly in research performance. That focuses attention on the high-quality researchers who account for most of the research performance – studies show that 2-5% of researchers in most countries produce more than half of the outcomes in terms of publications and grants. Having this is helpful and it’s a good circumstance.

Q: Do you have any further comments on the issue of whether university rankings are on the right track? What’s your prediction for the future?

A: I think bad rankings tend to undermine themselves over time because their results are not credible. Good ranking systems are open to refinement and improvement and they tend to get stronger, and that’s exactly the case with the Jiao Tong. I think the next frontier for the rankings is the measurement of teaching performance and student quality – the value added at the point of exit, whether it’s done as a value-added measure or just as a once-off measure. The OECD is in the early stages of developing internationally comparable indicators of student competence. It might use generic competency tests, like problem-solving skills, or it may use discipline-based tests in areas like physics which are common to many countries. It’s more difficult to use disciplines, but on the other hand, if you just use skills without knowledge, that’s also limited and perhaps open to question. The OECD has many steps and problems in trying to do this, and there are questions as to how it can be done – whether it’s within the frame of the institution or through national systems. There are many other questions about this, and the technical problems of getting comparable cross-country measures are considerable. But this may well happen, and when you have the capacity to rank on the basis of student outcomes, that probably becomes more powerful than research performance in some ways, at least in terms of the international market. Research performance probably distinguishes universities from other institutions and gives them prestige, but teaching outcomes are also important. Once you can establish comparability across countries and measure teaching outcomes that way, then it could be a new world.

End