University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the areas of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. Rosemary served as a member of the ESRC Grants Board from 1999 to 2003, and as a panel member for Education in the Research Assessment Exercises of 1996, 2001 and 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills  (DIUS) over 2008.

~~~~~~~~~~~~~~

The institutional performance of universities and their academic staff and students is a very topical issue in many countries, for potential students and their families and sponsors, for governments and for businesses. As well as numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear at the top of these rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for (the Department of) Innovation, Universities and Skills (or DIUS) commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes  on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were On-Line Higher Education Learning, Intellectual Property and research benefits; Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engaging in knowledge transfer activity; high quality teaching; improving workforce skills; and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, as well as surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out – higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), or the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume but not impact.

However, what is not emphasized enough is that, once a performance measure is made public, its consequences are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus, in the 2008 RAE last week, detailed cross-institutional results were made available by HEFCE to the media before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, with HEFCE helping it to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, which HEFCE feels it is not yet ready to pronounce on. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added for recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than the quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors, importantly by articulating the values and visions of staff and students, and possibly very little by ‘performance’ measures; they often appeal to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed, which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective. It fails to grasp sufficiently how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc.), and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

Reactions to the ranking of universities: is Malaysia over-reacting?

I have had a chance to undertake a quick survey among colleagues in other countries regarding reactions in their respective countries to the UK’s Times Higher World University Rankings 2007.

A colleague in the UK noted that, as one might expect from the home of one of the more notorious world rankings and a higher education system obsessed with reputation, ‘league tables’ are much discussed in the UK. The UK government, specifically the Higher Education Funding Council for England (HEFCE), has, as noted last week, commissioned major research into five ranking systems and their impact on higher education institutions in England. In other words, the UK government is very concerned with the whole business of ranking universities, for the reputation of the UK as a global centre for higher education is at stake.

Another colleague reported that, among academics in the UK, the reaction to the Times Higher rankings varies widely. Many people working in higher education are deeply sceptical and cynical about such league tables – about their value, purpose and especially their methodology. For the majority of UK universities that do not appear in the tables and are probably never likely to appear, the tables are of very little significance. However, for the main research-led universities they are a source of growing interest. These are the universities that see themselves as competing on the world stage. Whilst they will often criticise the methodologies in detail, they will still study the results very carefully and will certainly use good results for publicity and marketing. Several leading UK universities (e.g., Warwick) now have explicit targets, for example, to be in the top 25 or 50 by a particular year, and are developing strategies with this in mind. However, it is reported that most UK students pay little attention to the international tables, though universities are aware that rankings can have a significant impact on the recruitment of international students.

In Hong Kong, the Times Higher rankings have been seriously discussed both in the media and by university presidents (some of whose institutions received higher rankings this year, making it easier to request increased funding from government on the basis of that success). Among scholars and academics, especially those familiar with the various university ranking systems (the Times Higher rankings and others, like the Shanghai Jiao Tong University rankings), there is some scepticism, especially concerning the criteria used.

Rankings are a continuous source of debate in the Australian system, no doubt as a result of Australia’s strong focus on the international market. Both the Times Higher rankings and the recent one undertaken by the Melbourne Institute have resulted in quite strong debate, spurred by Vice-Chancellors whose institutions do not score at the top.

In Brazil, it is reported that the ranking of universities did not attract media attention or public debate, for the very reason that university rankings have had no impact on the budgetary decisions of the government. The more relevant issue on the higher education agenda in Brazil is social inclusion; thus public universities are rewarded for their plans to extend access to their undergraduate programs, especially if these include a large number of students per faculty. Being able to attract foreign students is of secondary importance to many universities. Thus, public universities have had, and continue to have, assured access to budget streams that reflect the Government’s historical level of commitment.

A colleague in France noted that the manner in which Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to the Times Higher rankings is relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ the senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia had better watch out, as the world is indeed watching us.

Morshidi Sirat

University rankings: deliberations and future directions

I attended a conference (the Worldwide Universities Network-organised Realizing the Global University, with a small pre-event workshop) and an Academic Cooperation Association-organised workshop (Partners and Competitors: Analysing the EU-US Higher Education Relationship) last week. Both events were well run and fascinating. I’ll be highlighting some key themes and debates that emerged in them throughout several entries in GlobalHigherEd over the next two weeks.


One theme that garnered a significant amount of attention in both places was the ranking of universities (e.g., see one table here from the recent Times Higher Education Supplement-QS ranking that was published a few weeks ago). In both London and Brussels stakeholders of all sorts spoke out, in mainly negative tones, about the impacts of ranking schemes. They highlighted all of the usual critiques that have emerged over the last several years; critiques that are captured in the:

Suffice it to say that everyone is “troubled by” (“detests”, “rejects”, “can’t stand”, “loathes”, “abhors”, etc.) ranking schemes, but at the same time the schemes are used when seen fit – usually by relatively highly ranked institutions and systems to legitimize their standing in the global higher ed world, or (e.g., see the case of Malaysia) to flog the politicians and senior officials governing higher education systems and/or universities.

If ranking schemes are here to stay, as they seem to be (despite the Vice-Chancellor of the University of Bristol emphasizing in London that “we only have ourselves to blame”), four themes emerged as to where the global higher ed world might be heading with respect to rankings:

(1) Critique and reformulation. If ranking schemes are here to stay, as credit ratings agencies’ (e.g., Standard & Poor’s) products also are, then the schemes need to be more effectively and forcefully critiqued, with an eye to the reformulation of existing methodologies. The Higher Education Funding Council for England (HEFCE), for example, is conducting research on ranking schemes, with a large report due to be released in February 2008. This comes on the back of the Institute for Higher Education Policy’s large “multi-year project to examine the ways in which college and university ranking systems influence decision making at the institutional and government policy levels”, and a multi-phase study by the OECD on the impact of rankings in select countries (see Phase I results here). On a related note, the three-year-old Faculty Scholarly Productivity Index is continually being developed in response to critiques, though I also know of many faculty and administrators who think it is beyond repair.

(2) Extending the power and focus of rankings. This week’s Economist notes that the OECD is developing a plan for a January 2008 meeting of member education ministers where they will seek approval to “[L]ook at the end result—how much knowledge is really being imparted”. What this means, in the words of Andreas Schleicher, the OECD’s head of education research, is that rather “than assuming that because a university spends more it must be better, or using other proxy measures for quality, we will look at learning outcomes”. The article notes that the first rankings should be out in 2010, and that:

[t]he task the OECD has set itself is formidable. In many subjects, such as literature and history, the syllabus varies hugely from one country, and even one campus, to another. But OECD researchers think that problem can be overcome by concentrating on the transferable skills that employers value, such as critical thinking and analysis, and testing subject knowledge only in fields like economics and engineering, with a big common core.

Moreover, says Mr Schleicher, it is a job worth doing. Today’s rankings, he believes, do not help governments assess whether they get a return on the money they give universities to teach their undergraduates. Students overlook second-rank institutions in favour of big names, even though the less grand may be better at teaching. Worst of all, ranking by reputation allows famous places to coast along, while making life hard for feisty upstarts. “We will not be reflecting a university’s history,” says Mr Schleicher, “but asking: what is a global employer looking for?” A fair question, even if not every single student’s destiny is to work for a multinational firm.

Leaving aside the complexities and politics of this initiative, the OECD is, yet again, setting the agenda for the global higher ed world.

(3) Blissful ignorance. The WUN event had a variety of speakers from the private for-profit world, including Jorge Klor de Alva, Senior Vice-President, Academic Excellence and Director of the University of Phoenix National Resource Center. The University of Phoenix, for those of you who don’t know, is part of the Apollo Group, has over 250,000 students, and is highly profitable with global ambitions. I attended a variety of sessions where people like Klor de Alva spoke, and they could not care less about ranking schemes, for their target “market” is a “non-traditional” one that (to date) tends not to matter to the rankers. Revenue, operating margin and income, and net income (e.g., see Apollo’s financials in their 2006 Annual Report), and the views of Wall Street analysts (but not the esteem of the intelligentsia), are what matter instead for these types of players.

(4) Performance indicators for “ordinary” universities. Several speakers and commentators suggested that the existing ranking schemes were frustrating to observe from the perspective of universities not ‘on the map’, especially if they would realistically never get on the map. Alternative schemes were discussed, including performance indicators that reflect the capacity of universities to meet local and regionally specific needs – needs that are often ignored by highly ranked universities or by the institutions developing the ranking methodologies. Thus a university could feel better or worse depending on how it does over time against such performance indicators. This perspective is akin to that put forward by Jennifer Robinson in her brilliant critique of existing global cities ranking schemes. Robinson’s book, Ordinary Cities: Between Modernity and Development (Routledge, 2006), is well worth reading if this is your take on ranking schemes.

The future directions that ranking schemes will take are uncertain, though what is certain is that, when the OECD and major funding councils start to get involved, the politics of university rankings cannot help but heat up even more. This said, presence and voice regarding the (re)shaping of schemes with distributional impacts always need to be viewed in conjunction with attention to absence and silence.

Kris Olds