Towards a Global Common Data Set for World University Rankers

Last week marked another burst of developments in the world university rankings sector, including two ‘under 50’ rankings. More specifically:

A coincidence? Very unlikely. But who was first with the idea, and why would the other ranker time its release so closely? We don’t know for sure, but we suspect the originator of the idea was Times Higher Education (with Thomson Reuters), as their outcome was formally released second. Moreover, the data analysis phase for the production of the THE 100 Under 50 was apparently “recalibrated”, whereas the QS data and methodology were the same as in their regular rankings – QS just sliced the data a different way. But you never know for sure, especially given Times Higher Education‘s unceremonious dumping of QS for Thomson Reuters back in 2009.

Speaking of competition and cleavages in the world university rankings world, it is noteworthy that India’s University Grants Commission announced, on the weekend, that:

Foreign universities entering into agreement with their Indian counterparts for offering twinning programmes will have to be among the global top 500.

The Indian varsities on the other hand, should have received the highest accreditation grade, according to the new set of guidelines approved by University Grants Commission today.

“The underlining objective is to ensure that only quality institutes are permitted for offering the twinning programmes to protect the interest of the students,” a source said after a meeting which cleared the regulations on twinning programmes.

They said foreign varsities entering into tie-ups with Indian partners should be ranked among the top 500 by the Times Higher Education World University Ranking or by Shanghai Jiaotong University of the top 500 universities [now deemed the Academic Ranking of World Universities].

Why does this matter? We’d argue that it is another sign of the multi-sited institutionalization of world university rankings. And institutionalization generates path dependency and normalization. When more closely tied to the logic of capital, it also generates uneven development, meaning that there are always winners and losers in the process of institutionalizing a sector. In this case the world’s second most populous country, with a fast-growing higher education system, will be utilizing these rankings to mediate which universities (and countries) its institutions can form linkages with.

Now, there are obvious pros and cons to the decision made by India’s University Grants Commission, including reducing the likelihood that ‘fly-by-night’ operations and foreign for-profits will be able to link up with Indian higher education institutions when offering international collaborative degrees. This said, the establishment of such guidelines does not necessarily mean they will be implemented. But this news item from India, related news from Denmark and the Netherlands regarding the uses of rankings to guide elements of immigration policy (see ‘What if I graduated from Amherst or ENS de Lyon…; ‘DENMARK: Linking immigration to university rankings‘), as well as the emergence of the ‘under 50’ rankings, are worth reflecting on a little more. Here are two questions we’d like to leave you with.

First, does the institutionalization of world university rankings increase the obligations of governments to analyze the nature of the rankers? As in the case of ratings agencies, we would argue more needs to be known about the rankers, including their staffing, their detailed methodologies, their strategies (including with respect to monetization), their relations with universities and government agencies, potential conflicts of interest, and so on. To be sure, there are some very conscientious people working on the production and marketing of world university rankings, but these are individuals, and it is important to set up the rules of the game so that a fair and transparent system exists. After all, world university rankers contribute to the generation of outcomes yet do not have to experience the consequences of said outcomes.

Second, if government agencies are going to use such rankings to enable or inhibit international linkage formation processes, not to mention direct funding, or encourage mergers, or redefine strategy, then who should be the manager of the data that is collected? Should it solely be the rankers? We would argue that the stakes are now too high to leave the control of the data solely in the hands of the rankers, especially given that much of it is provided for free by higher education institutions in the first place. But if not these private authorities, then who else? Or, if not who else, then what else?

While we were drafting this entry on Monday morning, a weblog entry by Alex Usher (of Canada’s Higher Education Strategy Associates) coincidentally generated a ‘pingback’ to an earlier entry titled ‘The Business Side of World University Rankings.’ Alex Usher’s entry (pasted in below, in full) raises an interesting question that is worthy of careful consideration, not just because of the idea of how the data could be more fairly stored and managed, but also because of his suggestions regarding the process to push this idea forward:

My colleague Kris Olds recently had an interesting point about the business model behind the Times Higher Education’s (THE) world university rankings. Since 2009 data collection for the rankings has been done by Thomson Reuters. This data comes from three sources. One is bibliometric analysis, which Thomson can do on the cheap because it owns the Web of Science database. The second is a reputational survey of academics. And the third is a survey of institutions, in which schools themselves provide data about a range of things, such as school size, faculty numbers, funding, etc.

Thomson gets paid for its survey work, of course. But it also gets the ability to resell this data through its consulting business. And while there’s little clamour for their reputational survey data (its usefulness is more than slightly marred by the fact that Thomson’s disclosure about the geographical distribution of its survey responses is somewhat opaque) – there is demand for access to all that data that institutional research offices are providing them.

As Kris notes, this is a great business model for Thomson. THE is just prestigious enough that institutions feel they cannot say no to requests for data, thus ensuring a steady stream of data which is both unique and – perhaps more importantly – free. But if institutions which provide data to the system want any data out of it again, they have to pay.

(Before any of you can say it: HESA’s arrangement with the Globe and Mail is different in that nobody is providing us with any data. Institutions help us survey students and in return we provide each institution with its own results. The Thomson-THE data is more like the old Maclean’s arrangement with money-making sidebars).

There is a way to change this. In the United States, continued requests for data from institutions resulted in the creation of a Common Data Set (CDS); progress on something similar has been more halting in Canada (some provincial and regional ones exist but we aren’t yet quite there nationally). It’s probably about time that some discussions began on an international CDS. Such a data set would both encourage more transparency and accuracy in the data, and it would give institutions themselves more control over how the data was used.

The problem, though, is one of co-ordination: the difficulties of getting hundreds of institutions around the world to co-operate should not be underestimated. If a number of institutional alliances such as Universitas 21 and the Worldwide Universities Network, as well as the International Association of Universities and some key university associations were to come together, it could happen. Until then, though, Thomson is sitting on a tidy money-earner.

While you could argue about the pros and cons of the idea of creating a ‘global common data set,’ including the likelihood of one coming into place, what Alex Usher is also implying is that there is a distinct lack of governance regarding world university rankers. Why are universities so anemic when it comes to this issue, and why are higher education associations not filling the governance space neglected by key national governments and international organizations? One answer is that their own individual self-interest has them playing the game as long as they are winning. Another possible answer is that they have not thought through the consequences, or really challenged themselves to generate an alternative. Another is that the ‘institutional research’ experts (e.g., those represented by the Association for Institutional Research in the case of the US) have not focused their attention on the matter. But whatever the answer, we think they at least need to be posing themselves a set of questions. And if it’s not going to happen now, when will it? Only after MIT demonstrates some high profile global leadership on this issue, perhaps with Harvard, like it did with MITx and edX?

Kris Olds & Susan L. Robertson

A case for free, open and timely access to world university rankings data

Well, the 2010 QS World University Rankings® were released last week and the results are continuing to generate considerable attention in the world’s media (link here for a pre-programmed Google news search of coverage).

For a range of reasons, news that QS placed Cambridge in the No. 1 spot, above Harvard, spurred on much of this media coverage (see, for example, these stories in Time, the Christian Science Monitor, and Al Jazeera). As Al Jazeera put it: “Did the Earth’s axis shift? Almost: Cambridge has nudged Harvard out of the number one spot on one major ranking system.”

Interest in the Cambridge over Harvard outcome led QS (which stands for QS Quacquarelli Symonds Ltd) to release this story (‘2010 QS World University Rankings® – Cambridge strikes back’). Do note, however, that Harvard scored 99.18/100 while QS gave Cambridge 100/100 (hence the first and second placings). For non-rankings watchers, Harvard had been pegged as No. 1 for the previous five years in rankings that QS published in association with Times Higher Education.

As the QS story notes, the economic crisis in the US, together with the decline in many US universities’ shares of “international faculty,” was the main cause of Harvard’s slide:

In the US, cost-cutting reductions in academic staff hire are reflected among many of the leading universities in this year’s rankings. Yale also dropped 19 places for international faculty, Chicago dropped 8, Caltech dropped 20, and UPenn dropped 53 places in this measure. However, despite these issues the US retains its dominance at the top of the table, with 20 of the top 50 and 31 of the top 100 universities in the overall table.

Facts like these aside, what we would like to highlight is that all of this information gathering and dissemination — both the back-end (pre-ranking) provision of the data, and the front end (post-ranking) acquisition of the data — focuses the majority of costs on the universities and the majority of benefits on the rankers.

The first cost to universities is the provision of the data. As one of us noted in a recent entry (‘Bibliometrics, global rankings, and transparency‘):

Data demands are becoming very resource consuming for universities. For example, the QS template currently being dealt with by universities around the world shows 14 main categories with sub-categories for each: all together there are 60 data fields, of which 10 are critical to the QS ranking exercise, to be launched in October 2010. Path dependency dynamics clearly exist, for once the pipelines are laid, the complexity of data requests can be gradually ramped up.

Keep in mind that the data is provided for free, though in the end it is a cost primarily borne by the taxpayer (for most universities are public). It is the taxpayer that pays the majority of the administrators’ salaries to enable them to compile the data and submit it to the rankers.

A second, though indirect and obscured, cost relates to the use of rankings data by credit rating agencies like Moody’s or Standard & Poor’s in their ratings of the credit-worthiness of universities. We’ve reported on this in earlier blog entries (e.g., ‘‘Passing judgment’: the role of credit rating agencies in the global governance of UK universities‘). Given that the cost of borrowing for universities is determined by their credit-worthiness, and rankings are used in this process, we can conclude that any increase in the cost of borrowing is actually also an increase in the cost of the university to the taxpayer.

Third, rankings can alter the views of people (students, faculty, investors) making decisions about mobility or resource allocation, and these decisions inevitably generate direct financial consequences for institutions and host city-regions. Given this, it seems only fair that universities and city-region development agencies should be able to freely use the base rankings data for self-reflection and strategic planning, if they so choose.

A fourth cost is subsequent access to the data. The rankings are released via a strategically planned media blitz, as are hints at causes for shifts in the placement of universities, but access to the base data — the data our administrative colleagues in universities in Canada, the US, the UK, Sweden, etc., supplied to the rankers — is not fully enabled. Rather, this freely provided data is used as the basis for:

the development of ancillary services and benchmarking capabilities that can be sold back to universities, funding councils, foundations, regional organizations (e.g., the European Commission which is intensely involved in benchmarking and now bankrolling a European ranking scheme), and the like.

Consider, for example, this Thomson Reuters statement on their Global Institutional Profiles Project website:

The first use of the data generated in the Global Institutional Profiles Project was to inform the Times Higher Education World University Ranking. However, there are many other services that will rely on the Profiles Project data. For example the data can be used to inform customized analytical reporting or customized data sets for a specific customer’s needs.

Thomson Reuters is developing a platform designed for easy access and interpretation of this valuable data set. The platform will combine different sets of key indicators, with peer benchmarking and visualization tools to allow users to quickly identify the key strengths of institutions across a wide variety of aspects and subjects.

Now, as QS’s Ben Sowter put it:

Despite the inevitable efforts that will be required to respond to a wide variety of enquiries from academics, journalists and institutions over the coming days there is always a deep sense of satisfaction when our results emerge. The tension visibly lifts from the team as we move into a new phase of our work – that of explaining how and why it works as opposed to actually conducting the work.

This year has been the most intense yet, we have grown the team and introduced a new system, introduced new translations of surveys, spent more time poring over the detail in the Scopus data we receive, sent out the most thorough fact files yet to universities in advance of the release – we have driven engagement to a new level – evaluating, speaking to and visiting more universities than ever.

The point we would like to make is that the process of taking “engagement to a new level” — a process coordinated and enabled by QS Quacquarelli Symonds Ltd and Times Higher Education/Thomson Reuters — is solely dependent upon universities being willing to provide data to these firms for free.

Given all of these costs, all of the base data — beyond the simple rankings available on websites like the THE World University Rankings 2010 (due out on 16 September), or QS World University Rankings Results 2010 — should be freely accessible to all.

Detailed information should also be provided about which unit, within each university, provided the rankers with the data. This would enable faculty, students and staff within ranked institutions to engage in dialogue about ranking outcomes, methodologies, and so on, should they choose to. This would also prevent confusing mix-ups such as what occurred at the University of Waterloo (UW) this week when:

UW representative Martin van Nierop said he hadn’t heard that QS had contacted the university, even though QS’s website says universities are invited to submit names of employers and professors at other universities to provide opinions. Data analysts at UW are checking the rankings to see where the information came from.

And access to this data should be provided on a timely basis, as in exactly when the rankings are released to the media and the general public.

In closing, we are making a case for free, open and timely access to all world university rankings data from January 2011, ideally on a voluntary basis. Alternative mechanisms, including intergovernmental agreements in the context of the next Global Bologna Policy Forum (in 2012), could also facilitate such an outcome.

If we have learned anything to date from the open access debate, and from ‘climategate’, it is that greater transparency helps everyone — the rankers (who will get more informed and timely feedback about their adopted methodologies), universities (faculty, students & staff), scholars and students interested in the nature of ranking methodologies, government ministries and departments, and the taxpayers who support universities (and hence the rankers).

Inspiration for this case comes from many people, as well as from the open access agenda, which is partly driven by the principle that taxpayer-funded research generates research outcomes that society should have free, open and timely access to. Surely this open access principle applies just as well to university rankings data!

Another reason society deserves to have free, open and timely access to the data is that a change in practices will shed light on how the organizations ranking universities implement their methodologies; methodologies that are ever changing (and hence more open to error).

Finer-grained access to the data would enable us to check out exactly why, for example, Harvard deserved a 99.18/100 while Cambridge was allocated a 100/100. As professors who mark student papers, outcomes this close lead us to cross-check the data, lest we subtly favour one student over another for X, Y or Z reasons. And cross-checking is even more important given that ranking is a highly mediatized phenomenon, as is clearly evident this week betwixt and between releases of the hyper-competitive QS vs THE world university rankings.

Free, open and timely access to the world university rankings data is arguably a win-win-win scenario, though it will admittedly rebalance the current focus of the majority of the costs on the universities, and the majority of the benefits on the rankers. Yet it is in the interest of the world’s universities, and the taxpayers who support these universities, for this to happen.

Kris Olds & Susan Robertson

CHERPA-network based in Europe wins tender to develop alternative global ranking of universities

Finally, the decision on who has won the European Commission’s million-euro tender – to develop and test a global ranking of universities – has been announced.

The successful bidder – the CHERPA network (the Consortium for Higher Education and Research Performance Assessment) – is charged with developing a ranking system to overcome what is regarded by the European Commission as the limitations of the Shanghai Jiao Tong and the QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises a consortium of leading European institutions in the field; all have been developing and offering rather different approaches to ranking over the past few years (see our earlier stories here, here and here for some of the potential contenders):

Will this new European Commission-driven initiative set the proverbial European cat amongst the Transatlantic alliance pigeons?

As we have noted in earlier commentary on university rankings, the different approaches tip the rankings playing field in the direction of different interests. Much to the chagrin of the continental Europeans, the high status US universities do well on the Shanghai Jiao Tong University Ranking, whilst Britain’s QS-Times Higher Education tends to see UK universities feature more prominently.

CHERPA will develop a design that follows the so-called ‘Berlin Principles on the ranking of higher education institutions‘. These principles stress the need to take the linguistic, cultural and historical contexts of the educational systems into account [this is something of an irony for those watchers following UK higher education developments last week, after a Cabinet reshuffle in which the reference to ‘universities’ in the departmental name was dropped. The two-year-old Department for Innovation, Universities and Skills has now been abandoned in favor of a mega-Department for Business, Innovation and Skills! (read more here)].

According to the website of one of the consortium members, CHE:

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

The different rankings will be targeted at different stakeholders: They will support decision-making in universities and especially better informed study decisions by students. Rankings that create transparency for prospective students should promote access to higher education.

University World News, in its report out today on the announcement, notes:

Testing will take place next year and must include a representative sample of at least 150 institutions with different missions in and outside Europe. At least six institutions should be drawn from the six large EU member states, one to three from the other 21, plus 25 institutions in North America, 25 in Asia and three in Australia.

There are multiple logics and politics at play here. On the one hand, a European ranking system may well give the European Commission more HE governance capacity across Europe, strengthening its steering of national systems in areas like ‘internationalization’ and ‘regional engagement’ – two key areas that have been identified for work to be undertaken by CHERPA.

On the other hand, this new European ranking system — when realized — might also appeal to countries in Latin America, Africa and Asia which currently do not feature in any significant way in the two dominant systems. Like the Bologna Process, the CHERPA ranking system might well find itself generating ‘echoes’ around the globe.

Or will regions around the world prefer to develop and promote their own niche ranking systems, elements of which were evident in the recently launched QS.com Asia ranking? Whatever the outcome, as we have observed before, there is a thickening industry, with profits to be had, on this aspect of the emerging global higher education landscape.

Susan Robertson

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has a particular expertise in education economics and applied econometrics.

Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, together with the demand for higher accountability and transparency, have all raised the need to compare university quality across countries.

Recognition of this need has also been greatly stirred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in terms of cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU is always associated with a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of the fragmented landscape of French higher education institutions (see Les Echos, 7 August 2008).

Whether the rankers intended it or not, university rankings have followed a destiny of their own: they are used by national policy makers to stimulate debates about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES or the SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the volatility of the university ranks to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology and to reveal whether the Shanghai ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we activate simultaneously different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (set of selected indicators, weighting scheme, and aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than the single ranks, allows one to deal with the criticism, often made of league tables and ranking systems, that ranks are presented as if they were calculated under conditions of certainty while this is rarely the case.
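To make the multi-modeling idea concrete, here is a minimal, purely illustrative sketch (not the JRC’s actual code or data): synthetic indicator scores for a handful of hypothetical universities are re-ranked under randomly drawn weighting schemes and two aggregation rules, and a frequency matrix records how often each institution lands at each rank across the simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized indicator scores (rows: universities, columns: indicators).
universities = ["Uni A", "Uni B", "Uni C", "Uni D", "Uni E"]
scores = rng.uniform(0.3, 1.0, size=(len(universities), 6))

n_sim = 5000
n_uni, n_ind = scores.shape
rank_counts = np.zeros((n_uni, n_uni), dtype=int)  # frequency matrix: university x rank

for _ in range(n_sim):
    # Source of uncertainty 1: the weighting scheme (random weights summing to 1).
    w = rng.dirichlet(np.ones(n_ind))
    # Source of uncertainty 2: the aggregation method (linear vs geometric).
    if rng.random() < 0.5:
        composite = scores @ w                    # weighted arithmetic average
    else:
        composite = np.prod(scores ** w, axis=1)  # weighted geometric average
    ranks = np.argsort(np.argsort(-composite))    # 0 = best rank in this simulation
    for uni, r in enumerate(ranks):
        rank_counts[uni, r] += 1

# Percentage of simulations in which each university occupies each rank.
freq = 100 * rank_counts / n_sim
for name, row in zip(universities, freq):
    print(name, " ".join(f"{p:5.1f}" for p in row))
```

A university whose probability mass is spread across several ranks in such a matrix is exactly the kind of case where, as the report argues, no single league position can be meaningfully assigned.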

The main findings of the report are the following. Both rankings are robust only in the identification of the top 15 performers on either side of the Atlantic, but unreliable on the exact ordering of all other institutions. And, even when combining all twelve indicators in a single framework, the space of the inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for those universities. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation to be added to the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and which are currently missing.

Beatrice d’Hombres and Michaela Saisana

Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated on the basis of weighted indicators, turned into a figure, added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared, and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all – which is actually not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?

Firstly, we do not believe in the ranking of entire HEIs. This is mainly due to the fact that such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. Thus, one can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact the physics department is outstanding, the sociology appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer. A doctor would diagnose that the man is in a serious condition, while a statistician might claim that overall he is doing fine.

So instead we always rank at the subject level. And given the results of the first ExcellenceRanking, which focused on natural sciences and mathematics in European universities with a clear target group of prospective Master and PhD students, we think that this proves the point: only 4 institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was in a quite closely related field.

Secondly, we do not create values by weighting indicators and then calculating an overall value. Why is that? The main reason is that any weight is necessarily arbitrary, or in other words political. The person doing the weighting decides which weight to give to each indicator. By doing so, you pre-decide the outcome of any ranking. You make it even worse when you then add the different values together and create one overall value, because this blurs the differences between individual indicators.

Say a discipline is publishing a lot but nobody reads it. If you give publications a weight of 2 and citations a weight of one, it will look like the department is very strong. If you do it the other way, it will look pretty weak. If you add the values you make it even worse because you blur the difference between both performances. And those two indicators are even rather closely related. If you summarize results from research indicators with reputation indicators, you make things entirely irrelevant.
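The arithmetic behind this point is easy to reproduce. Below is a small illustrative sketch with invented numbers for two hypothetical departments (not CHE data): swapping the weights on publications and citations reverses which department looks ‘best’, and the single summed value hides how sharply they differ on each indicator.

```python
# Invented illustration: two hypothetical departments, indicator scores out of 10.
departments = {
    "Dept X": {"publications": 9.0, "citations": 3.0},  # publishes a lot, rarely read
    "Dept Y": {"publications": 4.0, "citations": 8.0},  # publishes less, widely cited
}

def composite(dept, w_pub, w_cit):
    """Weighted sum of the two indicators."""
    return w_pub * dept["publications"] + w_cit * dept["citations"]

for w_pub, w_cit in [(2, 1), (1, 2)]:
    results = {name: composite(d, w_pub, w_cit) for name, d in departments.items()}
    winner = max(results, key=results.get)
    print(f"weights pub={w_pub}, cit={w_cit}: {results} -> 'best': {winner}")

# With pub=2/cit=1, Dept X comes out on top (21 vs 16); with pub=1/cit=2 the order
# flips (15 vs 20). The aggregated figure conceals that the two departments point
# in opposite directions on the underlying indicators.
```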

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classical ranking we allow users to create “my ranking” so they can choose the indicators they want to look at, and in which order.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the aggregation), the league table positions create the even worse illusion of distinctive and decisive differences between places. They bring alive the impression of a difference in quality (no time or space here to argue the tricky issue of what quality might be) which is measurable to the percentage point – in other words, of a qualitative, objectively recognizable and measurable difference between place number 12 and place number 15. Which is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES-QS: even in the subject cluster SocSci you find a mere difference of 4.3 points on a 100-point scale between league rank 33 and 43. In the overall university rankings, it is a meager 6.7-point difference between rank 21 and 41, going down to a slim 15.3-point difference between rank 100 and 200. That is to say, the league table positions of HEIs might differ by much less than a single point, or less than 1% (of an arbitrarily set figure). Thus, it tells us much less than the league position suggests.

Our approach, therefore, is to create groups (top, middle, bottom) which refer to the performance of each HEI relative to the other HEIs.
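As a rough illustration of this grouping idea (a simplified stand-in for CHE’s actual procedure, using invented scores), each HEI can be assigned to a top, middle or bottom group by comparing its indicator value with the quartiles of the distribution, instead of being forced into an exact league position:

```python
import statistics

# Invented indicator values for hypothetical HEIs (e.g., citations per paper).
indicator = {
    "HEI 1": 6.8, "HEI 2": 6.5, "HEI 3": 5.9, "HEI 4": 5.8,
    "HEI 5": 4.1, "HEI 6": 3.9, "HEI 7": 2.2, "HEI 8": 2.0,
}

# Lower and upper quartiles of the observed distribution.
q1, _, q3 = statistics.quantiles(sorted(indicator.values()), n=4)

def group(value):
    """Assign a relative group rather than an exact rank."""
    if value >= q3:
        return "top"
    if value <= q1:
        return "bottom"
    return "middle"

for name, value in indicator.items():
    print(f"{name}: {value:.1f} -> {group(value)} group")
```

Institutions whose values sit close together end up in the same group, so small and statistically meaningless differences no longer translate into different ‘places’.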

This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators which are chosen by the ranking institution. Consequently, the results of the respective ranking can tell you something about how an HEI performs in the framework of what the ranker thinks interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but maybe (depending on the methodology) who is performing best (or, in our case, better than average) in aspects considered relevant by the ranker.

A small but highly relevant aspect might be added here. Rankings (in the HE system as well as in other areas of life) might suggest that a result in an indicator proves that an institution is performing well in the area measured by the indicator. Well, it does not. All an indicator does is hint at the fact that, given the data are robust and relevant, the results give some idea of how close the gap is between the performance of the institution and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, the “best” or “better” is again a political decision if the indicator stands alone (e.g. are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in terms of creating transparency if they are properly used, i.e. if the users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization and if the ranking is understood as one instrument among various others fit to make whatever decision related to an HEI (study, cooperation, funding, etc.).

Finally, modesty is maybe what a ranker should have in abundance. Having run the ExcellenceRanking in three different phases (the initial one in 2007, a second phase with new subjects right now, and a repetition of the natural sciences just starting), I am certainly aware of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something – of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209-218
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35-52
Other researchers specialising in this (and often referring to our method) are e.g. Alex Usher, Marijk van der Wende or Simon Marginson.

Uwe Brandenburg

University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the area of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. Rosemary has served as a member of ESRC Grants Board 1999-2003, and Panel Member of the Education Research Assessment Exercise 1996, 2001, 2008.

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills (DIUS) over 2008.

~~~~~~~~~~~~~~

Institutional performance of universities and their academic staff and students is a very topical issue in many countries – for potential students and their families and sponsors, for governments and for businesses. As well as numerous national rankings, two annual international league tables in particular – the Shanghai Jiao Tong, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities – are the focus of much government and institutional interest, as universities vie with each other to appear in the top rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportions of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for (the Department of) Innovation, Universities and Skills (or DIUS) commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were On-Line Higher Education Learning; Intellectual Property and research benefits; the Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engagement in knowledge transfer activity; high quality teaching; improving workforce skills; and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, and also surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out – higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), or the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion about why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted) and the tendency to measure activity and volume but not impact are also considered in the report.

However, what is not emphasized enough is that the consequences, once a performance measure is made public, are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus, in the RAE 2008, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, and being helped by HEFCE to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, which HEFCE feels it is not yet ready to pronounce on. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added to recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors – importantly, by articulating the values and visions of staff and students – and possibly very little by ‘performance’ measures; they are often appealing to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators. In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective, fails to sufficiently grasp how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc) and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

‘University Systems Ranking (USR)’: an alternative ranking framework from EU think-tank

One of the hottest issues out there still continuing to attract world-wide attention is university rankings. The two highest profile ranking systems, of course, are the Shanghai Jiao Tong and the Times Higher rankings, both of which focus on what might constitute a world class university, and on the basis of that, who is ranked where. Rankings are also part of an emerging niche industry. All this of course generates a high level of institutional, national, and indeed supranational (if we count Europe in this) angst about who’s up, who’s down, and who’s managed to secure a holding position. And whilst everyone points to the flaws in these ranking systems, these two systems have nevertheless managed to capture the attention and imagination of the sector as a whole. In an earlier blog entry this year GlobalHigherEd mused over why European-level actors had not managed to produce an alternative system of university rankings which might counter the hegemony of the powerful Shanghai Jiao Tong (whose ranking system privileges US universities) on the one hand, and act as a policy lever that Europe could pull to direct the emerging European higher education system, on the other.

Yesterday The Lisbon Council, an EU think-tank (see our entry here for a profile of this influential think-tank), released what might be considered a challenge to the Shanghai Jiao Tong and Times Higher ranking schemes – a University Systems Ranking (USR) – in its report University Systems Ranking: Citizens and Society in the Age of Knowledge. The difference between this ranking system and the Shanghai and Times rankings is that it focuses on country-level data and change, not individual institutions.

The USR has been developed by the Human Capital Center at The Lisbon Council, Brussels (produced with support from the European Commission’s Education, Audiovisual and Culture Executive Agency) with advice from the OECD.

The report begins with the questions: why do we have university systems? What are these systems intended to do? And what do we expect them to deliver – to society, to individuals and to the world at large? The underlying message in the USR is that “a university system has a much broader mandate than producing hordes of Nobel laureates or cabals of tenure- and patent-bearing professors” (p. 6).

So how is the USR different, and what might we make of this difference for the development of universities in the future? The USR is based on six criteria:

  1. Inclusiveness – number of students enrolled in the tertiary sector relative to the size of its population
  2. Access – ability of a country’s tertiary system to accept and help advance students with a low level of scholastic aptitude
  3. Effectiveness – ability of a country’s education system to produce graduates with skills relevant to the country’s labour market (wage premia is the measure)
  4. Attractiveness – ability of a country’s system to attract a diverse range of foreign students (using the top 10 source countries)
  5. Age range – ability of a country’s tertiary system to function as a lifelong learning institution (share of 30-39 year olds enrolled)
  6. Responsiveness – ability of the system to reform and change – measured by the speed and effectiveness with which the Bologna Declaration was accepted (15 of the 17 countries surveyed have accepted the Bologna criteria)

These are then applied to 17 OECD countries (all but 2 of them signatories of the Bologna Process). A composite ranking is produced, as well as rankings on each of the criteria. So what were the outcomes for the higher education systems of these 17 countries?

Drawing upon all 6 criteria, a composite USR figure is then produced. Australia is ranked 1st, the UK 2nd and Denmark 3rd, whilst Austria and Spain are ranked 16th and 17th respectively (see Table 1 below). We can also see rankings based on specific criteria (Table 2 below).

[Table 1: composite University Systems Ranking of the 17 countries]

[Table 2: country rankings on each of the six USR criteria]
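As a schematic sketch of how such a composite can be assembled (using invented scores, not The Lisbon Council’s actual data or exact procedure), one simple approach is to rank each country on the six criteria separately and take the composite as the average of those criterion ranks:

```python
# Invented criterion scores (higher = better) for a few hypothetical countries
# on the six USR dimensions; these are not the report's figures.
criteria = ["inclusiveness", "access", "effectiveness",
            "attractiveness", "age_range", "responsiveness"]
scores = {
    "Country A": [8.1, 6.4, 7.2, 8.8, 5.9, 7.5],
    "Country B": [7.4, 7.9, 6.1, 6.3, 6.8, 8.2],
    "Country C": [5.2, 5.8, 8.4, 5.1, 7.7, 6.0],
}

def ranks_on(criterion_index):
    """Rank countries (1 = best) on a single criterion."""
    ordered = sorted(scores, key=lambda c: scores[c][criterion_index], reverse=True)
    return {country: position + 1 for position, country in enumerate(ordered)}

# Composite = mean of the six criterion ranks (one of many possible aggregation choices).
per_criterion = [ranks_on(i) for i in range(len(criteria))]
composite = {c: sum(r[c] for r in per_criterion) / len(criteria) for c in scores}

for country, avg_rank in sorted(composite.items(), key=lambda kv: kv[1]):
    print(f"{country}: average criterion rank = {avg_rank:.2f}")
```

The choice of aggregation rule is itself consequential, which is one reason the per-criterion rankings are worth presenting alongside any composite figure.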

There is much to be said for this intervention by The Lisbon Council – not the least being that it opens up debates about the role and purposes of universities. Over the past few months there have been numerous heated public interventions about this matter – from whether universities should be little more than giant patenting offices to whether they should be managers of social justice systems.

And though there are evident shortcomings (such as the lack of clarity about what might count as a university; the view that a university-based education is the most suitable form of education to produce a knowledge-based economy and society; questions about the equity/access range within any one country, and so on), the USR does, at least, place issues like ‘lifelong learning’, ‘access’ and ‘inclusion’ on the reform agenda for universities across Europe. It also sends a message that it has a set of values, currently not reflected in the two key ranking systems, that it would like to advance.

However, the big question now is whether universities will see value in this kind of ranking system for its wider systemic, as opposed to institutional, possibilities – even if only as a basis for discussing what universities are for and how we might produce more equitable knowledge societies and economies.

Susan Robertson and Roger Dale

Benchmarking ‘the international student experience’

GlobalHigherEd has carried quite a few entries on benchmarking practices in the higher education sector over the past few months – the ‘world class’ university, the OECD innovation scoreboards, the World Bank’s Knowledge Assessment Methodology, the Programme for International Student Assessment, and so on.

University World News this week reported on an interesting new development in international benchmarking practices – at least for the UK – suggesting, too, that the benchmarking machinery/industry is itself big business and likely to grow.

According to the University World News, the International Graduate Insight Group (or i-graduate) last week unveiled a study in the UK to:

…compare the expectations and actual experiences of both British and foreign students at all levels of higher education across the country. The Welsh Student Barometer will gather the opinions of up to 60,000 students across 10 Welsh universities and colleges. i-graduate will benchmark the results of the survey so that each university can see how its ability to match student expectations compares with that of other groupings of institutions, not only in Wales but also in the rest of the world.

i-graduate markets itself as:

an independent benchmarking and research service, delivering comparative insights for the education sector worldwide: your finger on the pulse of student and stakeholder opinion.

We deliver an advanced range of dedicated market research and consultancy services for the education sector. The i-graduate network brings international insight, risk assessment and reassurance across strategy and planning, recruitment, delivery and relationship management.

i-graduate has clearly been busy amassing information on ‘the international student experience’. It has collected responses from more than 100,000 students from over 90 countries via its International Student Barometer (ISB) – which it describes as the first truly global benchmark of the student experience. This information is packaged up (for a price) in multiple ways for different audiences, including leading UK universities. According to i-graduate, the ISB is:

a risk management tool, enabling you to track expectations against the experiences of international students. The ISB isolates the key drivers of international student satisfaction and establishes the relative importance of each – as seen through the eyes of your students. The insight will tell you how expectations and experience affect their loyalty, their likelihood to endorse and the extent to which they would actively encourage or deter others.

Indexes like this, whether providing information about one’s location in the hierarchy or strategic information on brand loyalty, act as a kind of disciplining and directing practice.

Those firms producing these indexes and barometers, like i-graduate, are also in reality packaging particular kinds of ‘knowledge’ about the sector and selling it back to the sector. In a recent ESRC-funded seminar series on Changing Cultures of Competitiveness, Dr. Ngai-Ling Sum described these firms as brokering a ‘knowledge brand’ – a trade-marked bundle of strategies/tools and insights, sold for a price, intended to alter an individual’s, institution’s or nation’s practices, in turn leading to greater competitiveness – a phenomenon she tags to practices involved in producing the Knowledge-Based Economy (KBE).

It will be interesting to look more closely at, and report in a future blog on, what the barometer is measuring. For it is the specific socio-economic and political content of these indexes and barometers, as well as the disciplining and directing practices involved, which are important for understanding the direction of global higher education.

Susan Robertson

OECD ministers meet in January to discuss possible evaluation of “outcomes” of higher education

Further to our last entry on this issue, and a 15 November 2007 story in The Economist, here is an official OECD summary of the Informal OECD Ministerial Meeting on evaluating the outcomes of Higher Education, Tokyo, 11-12 January 2008. The meeting relates to the perception, in the OECD and its member governments, of an “increasingly significant role of higher education as a driver of economic growth and the pressing need for better ways to value and develop higher education and to respond to the needs of the knowledge society”.

Producing the global knowledge economy: the World Bank and the KAM

What it means to either talk about, or indeed ‘produce’, a knowledge-based economy (KBE) is a bit like nailing jelly to the wall; it is damn slippery stuff! Part of the problem, of course, is that like all powerful metaphors, the KBE has a lot of political work to do, and it is powerful precisely because it can do that political work. It has something in it for everyone, whatever one’s politics.

Over 2008, GlobalHigherEd will run a series of analytical pieces making sense of the various players, projects and politics that seem to be involved in the production of a knowledge-based economy – from programs being developed by the World Bank, OECD and World Economic Forum, to knowledge spaces that include knowledge incubators such as Futurelab, local art spaces and cyberspace. Contributions to this theme from fellow bloggers out there, as always, are more than welcome.

We begin this series with the World Bank which, since 1998, has been busy undergoing a major ‘makeover’ – re-presenting itself not as a ‘development bank’ but as a ‘knowledge bank’. This move, under the leadership of Bank President James Wolfensohn, took seriously the idea that how we manage knowledge is important, and that knowledge is a key factor in technological creation, adoption and communication.

One outcome of the Bank’s move was the Knowledge For Development (or K4D) Program aimed at helping developing countries capitalize on the ‘knowledge revolution’. Specifically, developing (and also developed) countries are challenged to plan appropriate investments in human capital, effective institutions, relevant technologies, and innovative and competitive enterprises.

These challenges are then translated into the four pillars of a knowledge-based economy comprising:

  • an ‘economic and institutional regime’ which values efficiency and entrepreneurship
  • an ‘educated’ population
  • an efficient ‘innovation’ system
  • an ‘information and communication technology’ infrastructure

The four pillars feed into the Bank’s Knowledge Assessment Methodology – or KAM – an interactive benchmarking tool which now consists of 83 structural and qualitative variables for 140 countries, used to measure performance on the KE pillars against an imagined perfect score.

A Knowledge Economy Index (KEI) is generated, giving an overall score, though scores can also be broken down by each of the four pillars. Development advice is then fed out through a series of ‘product lines’.
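Before turning to those product lines, here is a minimal sketch of how such an index gets built. The rank-based 0-10 normalization, the toy countries and the toy indicator values are assumptions for illustration only; they are not the Bank’s published procedure or data.

    # Minimal sketch of a KAM/KEI-style calculation.
    # The rank-based 0-10 normalization and all values are illustrative
    # assumptions, not the World Bank's published procedure or data.
    import pandas as pd

    # Toy indicator values for imaginary countries, one indicator per
    # pillar (the real KAM uses 83 variables for 140 countries).
    raw = pd.DataFrame({
        "regulatory_quality":  [7.2, 3.1, 5.5],    # economic/institutional regime
        "adult_literacy":      [99.0, 87.5, 93.2], # education
        "patents_per_million": [310, 12, 95],      # innovation
        "internet_users_pct":  [88, 34, 61],       # ICT infrastructure
    }, index=["Country A", "Country B", "Country C"])

    pillars = {
        "regime":     ["regulatory_quality"],
        "education":  ["adult_literacy"],
        "innovation": ["patents_per_million"],
        "ict":        ["internet_users_pct"],
    }

    # Normalize each indicator to a 0-10 scale by rank: the top-ranked
    # country gets 10, the bottom-ranked 0 (all indicators here are
    # 'higher is better' to keep the toy example simple).
    n = len(raw)
    normalized = raw.rank(ascending=True).sub(1).div(n - 1).mul(10)

    # Pillar scores average their indicators; the KEI averages the pillars.
    pillar_scores = pd.DataFrame(
        {p: normalized[cols].mean(axis=1) for p, cols in pillars.items()}
    )
    kei = pillar_scores.mean(axis=1).rename("KEI")
    print(pillar_scores.join(kei).round(2))

Seen this way, the ‘imagined perfect score’ is simply the top of a rank ordering that is itself produced by the prior choice of which variables count, a point we return to below.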

The simplest ‘product line’ is a ‘do-it-yourself’ assessment of your economy in relation to all countries, others in the region, countries in the same income group, and so on. The user can either generate a Basic Scorecard using around 14 key variables, or move to more complex representations based on combinations of all 83 variables, the performance scores of all countries, comparisons over time, cross-country comparisons, and so on.
[Figure: KAM comparison of the US, China and India]
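In spirit, a Basic Scorecard-style comparison might look like the sketch below, again with invented variables, groupings and already-normalized scores standing in for the real 14-variable scorecard:

    # Illustrative 'basic scorecard'-style comparison; variables, groupings
    # and scores are invented, not the KAM's actual scorecard or data.
    import pandas as pd

    # Already-normalized 0-10 scores for a subset of 'key variables'.
    scores = pd.DataFrame({
        "adult_literacy":      [9.0, 2.5, 6.0],
        "internet_users_pct":  [8.5, 1.5, 5.0],
        "patents_per_million": [9.5, 0.5, 4.0],
    }, index=["Country A", "Country B", "Country C"])

    focus = "Country A"
    comparators = ["Country B", "Country C"]  # e.g. 'others in the region'

    scorecard = pd.DataFrame({
        focus: scores.loc[focus],
        "comparator_mean": scores.loc[comparators].mean(),
    })
    scorecard["gap"] = scorecard[focus] - scorecard["comparator_mean"]
    print(scorecard.round(2))

The real tool wraps this in an interactive interface, of course; the point is simply that the different ‘product lines’ are, at bottom, different slices of the same normalized table.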

Other ‘product lines’ include the Bank producing policy reports for specific countries (for example, El Salvador, Turkey, Morocco), undertaking comprehensive assessments (for example, India, China, Korea, Chile, the African region), and running learning events to exchange best practice. GlobalHigherEd’s blog entry on the reform of the Malaysian higher education system following the World Bank’s 2007 review is a good example of how the KAM is being used to reshape higher education policy and practice.

At one level this is fun. However, it is also a very serious business, as the benchmarking works like a learning tool: you learn where you are in this imagined perfect knowledge economy, and then strategize as to how to get to your preferred position, using the pillars as policy guides and levers.
[Figure: top ten country comparison]

Benchmarking, ranking and other kinds of league tables are becoming more and more popular as tools for promoting particular kinds of learning among institutions, nations and regions. GlobalHigherEd has been profiling some of these – for instance PISA, the Programme for International Student Assessment, the OECD’s Innovation Scoreboard, and University Rankings.

As with all of these systems of ranking and benchmarking, the most interesting issue with the World Bank’s Knowledge Assessment Methodology is what is being measured, why, and with what likely outcomes. Leaving aside for the moment the thorny issue of the efficacy of the indicators (such as the Human Development Index, which is one of the 83 indicators making up the KAM), running through all 83 indicators gives a quick sense of the political nature of the project: the production of a world order that values global trade, few bans on imports and licensing, strong protections for intellectual property (IP), a system for ensuring payments for royalties and IP across borders, high levels of adult literacy, landlines and computers to support global connectivity, and so on. Absent from this list of indicators are ways of representing unpaid labor, alternative systems of knowledge production, cultural knowledges, and so on.

The developed Western economies are more likely to be advantaged by this kind of economy – given their interest in extending their services sectors globally and securing greater returns from the high end of the value chain. However, in areas like education, the policy levers are still rather crude. It is difficult to see, for instance, how investments in higher education per se will generate those innovative, creative and entrepreneurial individuals who are regarded as the engines of this new economy.

Susan Robertson

Has audit culture in higher education, at least at the national scale, not (yet) come to Canada?

This question caught the eye of the Chronicle of Higher Education today, and it ties back to our 17 September posting on internationalization in Canada and the perceived (according to the Association of Universities and Colleges of Canada) lack of a “coherent” national strategy on this front. It is noteworthy that institutions as diverse as the OECD, the Canadian Federation of Students, the Canadian Council on Learning, and the Association of Universities and Colleges of Canada have all expressed concern, over the last few weeks, about the national higher education data gap; a gap that limits the capacity of analysts, advocates, and policy-makers to understand what is going on within the country’s higher education system (see also our report this week on how it affects Canada and international student mobility strategies). This data gap also makes it difficult to compare the Canadian system internationally. The two tables below, from the recent Education at a Glance 2007: OECD Indicators report, provide striking examples of what the above institutions are concerned about (“m” = data not available).

[Tables from Education at a Glance 2007: OECD Indicators, illustrating the Canadian data gap]

Note: the OECD report (p. 54) states that a “traditional university degree is associated with completion of ‘type A’ tertiary courses; ‘type B’ generally refers to shorter and often vocationally oriented courses”.

The creation of new forms of internationally comparable data is a foundation of national, and increasingly global, governance (witness the power of the OECD to frame debates and policy shifts), including for the restructuring of higher education systems. International comparative data also provides the fuel for actors as diverse as faculty unions and boards of trade to put pressure on governments and other stakeholders to reshape higher education systems. It will be interesting to see how these debates unfold in Canada, complicated as they are by provincial jurisdiction over education, but in a context where global competition is becoming both a mantra and a force for change, for good and for bad.

Kris Olds