Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, which are then added up and turned into a single overall figure, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared and the scope of indicators is rather limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all, which is actually not true. Just because the Toyota Prius uses a very different technology to produce power does not exclude it from the species of automobiles. So what are the differences?


Firstly, we do not believe in ranking entire HEIs. This is mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. Thus, one can fairly argue that it does not help a student looking for a physics department to learn that university A is average overall when in fact its physics department is outstanding, its sociology department appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that, overall, he is doing fine.

So instead we always rank at the subject level. And the results of the first ExcellenceRanking, which focused on natural sciences and mathematics in European universities with a clear target group of prospective Master's and PhD students, prove the point: only 4 institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And these are quite closely related fields.


Secondly, we do not create values by weighting indicators and then calculating an overall score. Why is that? The main reason is that any weight is necessarily arbitrary, or in other words political: the person doing the weighting decides which weight each indicator gets, and by doing so pre-decides the outcome of the ranking. You make it even worse when you then add the different values together into one overall value, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of 2 and citations a weight of 1, the department will look very strong. If you do it the other way around, it will look rather weak. If you then add the values together you make it even worse, because you blur the difference between the two performances. And those two indicators are still rather closely related; if you aggregate research indicators with reputation indicators, the result becomes meaningless.
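To make the arithmetic concrete, here is a minimal sketch in Python with invented figures (not CHE data): two hypothetical departments are scored under two different weighting schemes, and simply swapping the weights reverses which department appears stronger.

```python
# Minimal sketch with invented figures (not CHE data): how the choice of
# weights pre-decides the outcome of a weighted, aggregated ranking.

def weighted_score(publications: float, citations: float,
                   w_pub: float, w_cit: float) -> float:
    """Collapse two indicator values into one score using the given weights."""
    return w_pub * publications + w_cit * citations

# Department A publishes a lot but is rarely cited; department B is the reverse.
dept_a = {"publications": 90, "citations": 10}
dept_b = {"publications": 40, "citations": 60}

for w_pub, w_cit in [(2, 1), (1, 2)]:
    score_a = weighted_score(dept_a["publications"], dept_a["citations"], w_pub, w_cit)
    score_b = weighted_score(dept_b["publications"], dept_b["citations"], w_pub, w_cit)
    leader = "A" if score_a > score_b else "B"
    print(f"weights pub={w_pub}, cit={w_cit}: A={score_a}, B={score_b} -> {leader} looks stronger")

# Output:
# weights pub=2, cit=1: A=190, B=140 -> A looks stronger
# weights pub=1, cit=2: A=110, B=160 -> B looks stronger
```

The point is not the particular numbers but that the choice of weights, a political decision, settles the outcome before any data are consulted.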

Instead, we let the indicator results stand on their own and let users decide what is important for their personal decision-making process. For example, in the classical ranking we allow users to create “my ranking”, so they can choose the indicators they want to look at and the order in which to view them.

Thirdly, we strongly object to the idea of league tables. If the values which produce the table are technically arbitrary (because of the weighting and the aggregation), the league table positions create the even worse illusion of distinct and decisive differences between places. They create the impression of a difference in quality (no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point; in other words, that there is a qualitative, objectively recognizable and measurable difference between place number 12 and place number 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES QS: even in the social sciences subject cluster there is a mere 4.3-point difference on a 100-point scale between league ranks 33 and 43. In the overall university ranking, there is a meagre 6.7-point difference between ranks 21 and 41, going down to a slim 15.3-point difference between ranks 100 and 200. That is to say, the league table positions of HEIs might differ on the underlying scale by less than a single point, or less than 1% (of an arbitrarily set figure); the scores tell us much less than the league positions suggest.

Our approach, therefore, is to create groups (top, middle, bottom) which describe the performance of each HEI relative to the other HEIs.
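To illustrate the idea of groups rather than precise positions, here is a small Python sketch. The institution names and values are invented, and the simple tertile split stands in for the (more involved) CHE grouping method purely for illustration.

```python
# Minimal sketch of a group-based presentation (top / middle / bottom) for a
# single indicator. The values are invented and the simple tertile split below
# is only an illustration, not the actual CHE grouping procedure.
import statistics

indicator = {  # hypothetical indicator values for one subject
    "Uni A": 8.1, "Uni B": 7.9, "Uni C": 6.5, "Uni D": 6.4,
    "Uni E": 5.0, "Uni F": 4.8, "Uni G": 3.2, "Uni H": 3.1,
}

# Tertile boundaries computed relative to all institutions in the sample.
lower, upper = statistics.quantiles(sorted(indicator.values()), n=3)

def group(value: float) -> str:
    """Assign an institution to a group instead of a precise league position."""
    if value >= upper:
        return "top"
    if value <= lower:
        return "bottom"
    return "middle"

for name, value in sorted(indicator.items(), key=lambda item: -item[1]):
    print(f"{name}: {group(value)} group")
```

Presented this way, two institutions separated by a fraction of a point end up in the same group rather than several league positions apart.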

uwe3

This means our rankings are not as easily read as others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators chosen by the ranking institution. Consequently, the results of the respective ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but maybe (depending on the methodology) who is performing best (or, in our case, better than average) in the aspects the ranker considers relevant.

A small but highly relevant aspect might be added here. Rankings (in the higher education system as well as in other areas of life) might suggest that a result on an indicator proves that an institution is performing well in the area measured by that indicator. Well, it does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the institution's performance and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (e.g. are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in creating transparency if they are properly used, i.e. if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among several for making whatever decision relates to an HEI (study, cooperation, funding, etc.).

Finally, modesty is maybe what a ranker should have in abundance. Having run the ExcellenceRanking through three different phases (the initial one in 2007, a second phase with new subjects right now, and a repetition of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something, of not picking an excellent institution. For the world of ranking, Einstein's conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209–218.
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35–52.
Other researchers specialising in this area (and often referring to our method) include, for example, Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the area of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. She served as a member of the ESRC Grants Board from 1999 to 2003, and as a panel member for the Education Research Assessment Exercises of 1996, 2001 and 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills  (DIUS) over 2008.

~~~~~~~~~~~~~~

The institutional performance of universities and their academic staff and students is a very topical issue in many countries, for potential students and their families and sponsors, for governments and for businesses. As well as numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear at the top of these rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for (the Department of) Innovation, Universities and Skills (or DIUS) commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes  on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were On-Line Higher Education Learning, Intellectual Property and research benefits; Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engagement in knowledge transfer activity; high-quality teaching; improving workforce skills; and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, as well as surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out: higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), and the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume rather than impact.

However, what is not emphasized enough is that the consequences, once a performance measure is made public, are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus, in the RAE 2008, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, and being helped by HEFCE to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, on which HEFCE feels it is not yet ready to pronounce. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added for recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors, importantly by articulating the values and visions of staff and students, and possibly very little by ‘performance’ measures; they often appeal to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed, which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective, fails to sufficiently grasp how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc) and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

UK-China partnerships and collaborations in higher education

Both China (PRC) and the Hong Kong SAR offer an expanding and highly competitive market opportunity for overseas higher education institutions (HEIs). As noted in a recent report commissioned by the British Council (UK-China-Hong Kong Transnational Education Project), a number of UK HEIs are providing hundreds of new ‘international’ degree programmes in Hong Kong and China.

According to the Hong Kong Education Bureau, in January 2008 there were over 400 degree programmes run by 36 different UK HEIs in Hong Kong. On the one hand, UK HEIs can be seen to work as independent operators, offering a number of courses to local students registered with the Hong Kong Education Bureau under the ‘Non-local Higher and Professional Education (Regulation) Ordinance’. At the same time, UK HEIs have also initiated a series of collaborations with Hong Kong HEIs; these collaborations are exempted from registration under the Ordinance. In January 2008 there were over 150 registered and 400 exempted courses run by 36 different UK HEIs in Hong Kong.

These collaborations are a relatively recent phenomenon: according to the British Council report, more than 40% of joint initiatives in Hong Kong were begun after 2003. Overall, the UK is a significant provider of international education services in Hong Kong, providing 63% of ‘non-local’ courses (compared to 22% from Australia, 5% from the USA and 1% from Canada). These links were bolstered by the ‘Memorandum of Understanding on Education Cooperation’ signed on 11 May 2006 by Arthur Li (Secretary for Education and Manpower, Hong Kong) and Bill Rammell (Minister of State for Higher Education and Lifelong Learning, UK). The memorandum aims, amongst other things, to strengthen partnerships and strategic collaboration between the UK and Hong Kong.

UK HEIs’ involvement in delivering HE in China is ostensibly less well developed. However, in 2006, UK HEIs provided the QAA (Quality Assurance Agency for Higher Education) with information on 352 individual links with 232 Chinese HE institutions or organisations. Some recent significant developments with respect to international ‘partnerships’ with Chinese institutions include Xi’an Jiaotong Liverpool University (XJTLU), located in Suzhou in China, and The University of Nottingham Ningbo, which is sponsored by the City of Ningbo, China, with cooperation from Zhejiang Wanli University. Other examples of UK-China international partnerships include: Leeds Metropolitan University and Zhejiang University of Technology; Queen Mary, University of London and Beijing University of Posts and Telecommunications; The Queen’s University of Belfast and Shenzhen University; and the University of Bedfordshire and the China Agricultural University.

In 2006, the QAA conducted audits of 10 selected partnerships between UK and Chinese HEIs in order to establish if and how UK institutions were maintaining academic standards within these partnerships. The main findings are that:

  • nearly half (82) of all UK higher education institutions reported that they are involved in some way in providing higher education opportunities in China;
  • there is great variety in the type of link used to deliver UK awards in China, the subjects studied and the nature of the awards;
  • in 2005-06 there were nearly 11,000 Chinese students studying in China for a UK higher education award, 3,000 of whom were on programmes that would involve them completing their studies in the UK;
  • institutions’ individual arrangements for managing the academic standards and quality of learning opportunities are generally comparable with programmes in the UK and reflect the expectations of the Code of practice for the assurance of academic quality and standards in higher education (Code of practice), Section 2: Collaborative provision and flexible and distributed learning (including e-learning), published by QAA.

The map profiled above was extracted from this report. A similar exercise was carried out in 2007 on partnerships between 6 UK HEIs and Hong Kong HEIs.

These practices and partnerships exemplify the international outlook of many UK HEIs, and underscore the perceived (significant) role of China in their future planning and policies. Unlike Hong Kong, China is seen as a market ripe for expansion, with substantial unmet demand for higher education that will only grow in the future. China is by far the biggest ‘source’ country of international students globally, and UK institutions are increasingly recognising the possibility of taking their educational programmes to the students.

Johanna Waters

The ‘European Quality Assurance Register’ for higher education: from networks to hierarchy?

Quality assurance has become an important global dialogue, with quality assurance agencies embedded in the fabric of the global higher education landscape. These agencies are mostly nationally-located institutions organised into networks, for example the Nordic Quality Assurance Network in Higher Education, or the US-based Council for Higher Education Accreditation.

Since the early 1990s, we have seen the development of regional and global networks of agencies, for instance the European Association for Quality Assurance in Higher Education, and the International Network for Quality Assurance Agencies in Higher Education, which in 2007 boasted full membership of 136 organizations from 74 countries. Such networks both drive and produce processes of globalization and regionalization.


The emergence of ‘registers’ – of the kind announced today with the launch of the European Quality Assurance Register (EQAR) by the E4 Group (ESU, the European University Association – EUA, the European Association of Institutions in Higher Education – EURASHE, and the European Network of Quality Assurance Agencies – ENQA) – signals a rather different kind of ‘globalising’ development in the sector. In short, we might see it as a move from a network of agencies to a register that acts to regulate the sector. It also signals a further development in the creation of a European higher education industry.

So, what will the EQAR do? According to EQAR, its role is to

…provide clear and reliable information on the quality assurance agencies (QAAs) operating in Europe: this is a list of agencies that substantially comply with the European Standards and Guidelines for Quality Assurance (ESG) as adopted by the European ministers of higher education in Bergen 2005.

The Register is expected to:

  • promote student mobility by providing a basis for the increase of trust among higher education institutions;
  • reduce opportunities for “accreditation mills” to gain credibility;
  • provide a basis for governments to authorize higher education institutions to choose any agency from the Register, if that is compatible with national arrangements;
  • provide a means for higher education institutions to choose between different agencies, if that is compatible with national arrangements; and
  • serve as an instrument to improve the quality of education.


All Quality Assurance Agencies that comply with the European Standards and Guidelines for Quality Assurance will feature on the register, with compliance secured through an external review process.

There will also be a Register Committee – an independent body comprising 11 quality assurance experts nominated by European stakeholder organisations. This committee will decide on the inclusion of quality assurance agencies. The EQAR association, which operates the Register, will be managed by an Executive Board, composed of E4 representatives, and a Secretariat.

The ‘register’ not only formalises and institutionalises a new layer of quality assurance, but also generates a regulatory hierarchy over and above other public and private regulatory agencies. It is also intended to ensure the development of a European higher education industry carrying a stamp of regulatory approval that provides important information in the global marketplace.

Susan Robertson