Last week marked another burst of developments in the world university rankings sector, including two ‘under 50’ rankings. More specifically:
- 29 May 2012: QS launches QS Top 50 Under 50
- 31 May 2012: Times Higher Education (with Thomson Reuters) launches THE 100 Under 50
A coincidence? Very unlikely. But who was first with the idea, and why would the other ranker time their release so closely? We don’t know for sure, but we suspect the originator of the idea was Times Higher Education (with Thomson Reuters), as their outcome was formally released second. Moreover, the data analysis phase for the production of the THE 100 Under 50 was apparently “recalibrated,” whereas the QS data and methodology were the same as in their regular rankings – QS just sliced the data a different way. But you never know for sure, especially given Times Higher Education‘s unceremonious dumping of QS for Thomson Reuters back in 2009.
Speaking of competition and cleavages in the world university rankings world, it is noteworthy that India’s University Grants Commission announced, on the weekend, that:
Foreign universities entering into agreement with their Indian counterparts for offering twinning programmes will have to be among the global top 500.
The Indian varsities on the other hand, should have received the highest accreditation grade, according to the new set of guidelines approved by University Grants Commission today.
“The underlining objective is to ensure that only quality institutes are permitted for offering the twinning programmes to protect the interest of the students,” a source said after a meeting which cleared the regulations on twinning programmes.
They said foreign varsities entering into tie-ups with Indian partners should be ranked among the top 500 by the Times Higher Education World University Ranking or by Shanghai Jiaotong University’s ranking of the top 500 universities [now deemed the Academic Ranking of World Universities].
Why does this matter? We’d argue that it is another sign of the multi-sited institutionalization of world university rankings. And institutionalization generates path dependency and normalization. When more closely tied to the logic of capital, it also generates uneven development, meaning that there are always winners and losers in the process of institutionalizing a sector. In this case the world’s second most populous country, with a fast-growing higher education system, will be utilizing these rankings to mediate which universities (and countries) its institutions can form linkages with.
Now, there are obvious pros and cons to the decision made by India’s University Grants Commission, including reducing the likelihood that ‘fly-by-night’ operations and foreign for-profits will be able to link up with Indian higher education institutions when offering international collaborative degrees. This said, the establishment of such guidelines does not necessarily mean they will be implemented. But this news item from India, related news from Denmark and the Netherlands regarding the uses of rankings to guide elements of immigration policy (see ‘What if I graduated from Amherst or ENS de Lyon…’; ‘DENMARK: Linking immigration to university rankings’), as well as the emergence of the ‘under 50’ rankings, are worth reflecting on a little more. Here are two questions we’d like to leave you with.
First, does the institutionalization of world university rankings increase the obligations of governments to analyze the nature of the rankers? As in the case of ratings agencies, we would argue more needs to be known about the rankers, including their staffing, their detailed methodologies, their strategies (including with respect to monetization), their relations with universities and government agencies, potential conflicts of interest, and so on. To be sure, there are some very conscientious people working on the production and marketing of world university rankings, but these are individuals, and it is important to set up the rules of the game so that a fair and transparent system exists. After all, world university rankers contribute to the generation of outcomes yet do not have to experience the consequences of said outcomes.
Second, if government agencies are going to use such rankings to enable or inhibit international linkage formation processes, not to mention direct funding, or encourage mergers, or redefine strategy, then who should be the manager of the data that is collected? Should it solely be the rankers? We would argue that the stakes are now too high to leave the control of the data solely in the hands of the rankers, especially given that much of it is provided for free by higher education institutions in the first place. But if not these private authorities, then who else? Or, if not who else, then what else?
While we were drafting this entry on Monday morning, a weblog entry by Alex Usher (of Canada’s Higher Education Strategy Associates) coincidentally generated a ‘pingback’ to an earlier entry titled ‘The Business Side of World University Rankings.’ Alex Usher’s entry (pasted in below, in full) raises an interesting question that is worthy of careful consideration, not just because of his idea of how the data could be more fairly stored and managed, but also because of his suggestions regarding the process to push this idea forward:
My colleague Kris Olds recently had an interesting point about the business model behind the Times Higher Education’s (THE) world university rankings. Since 2009 data collection for the rankings has been done by Thomson Reuters. This data comes from three sources. One is bibliometric analysis, which Thomson can do on the cheap because it owns the Web of Science database. The second is a reputational survey of academics. And the third is a survey of institutions, in which schools themselves provide data about a range of things, such as school size, faculty numbers, funding, etc.
Thomson gets paid for its survey work, of course. But it also gets the ability to resell this data through its consulting business. And while there’s little clamour for their reputational survey data (its usefulness is more than slightly marred by the fact that Thomson’s disclosure about the geographical distribution of its survey responses is somewhat opaque) – there is demand for access to all that data that institutional research offices are providing them.
As Kris notes, this is a great business model for Thomson. THE is just prestigious enough that institutions feel they cannot say no to requests for data, thus ensuring a steady stream of data which is both unique and – perhaps more importantly – free. But if institutions which provide data to the system want any data back out of it again, they have to pay.
(Before any of you can say it: HESA’s arrangement with the Globe and Mail is different in that nobody is providing us with any data. Institutions help us survey students and in return we provide each institution with its own results. The Thomson-THE data is more like the old Maclean’s arrangement with money-making sidebars).
There is a way to change this. In the United States, continued requests for data from institutions resulted in the creation of a Common Data Set (CDS); progress on something similar has been more halting in Canada (some provincial and regional ones exist but we aren’t yet quite there nationally). It’s probably about time that some discussions began on an international CDS. Such a data set would both encourage more transparency and accuracy in the data, and it would give institutions themselves more control over how the data was used.
The problem, though, is one of co-ordination: the difficulties of getting hundreds of institutions around the world to co-operate should not be underestimated. If a number of institutional alliances such as Universitas 21 and the Worldwide Universities Network, as well as the International Association of Universities and some key university associations were to come together, it could happen. Until then, though, Thomson is sitting on a tidy money-earner.
While you could argue about the pros and cons of the idea of creating a ‘global common data set,’ including the likelihood of one coming into place, what Alex Usher is also implying is that there is a distinct lack of governance regarding world university rankers. Why are universities so anemic when it comes to this issue, and why are higher education associations not filling the governance space neglected by key national governments and international organizations? One answer is that their own individual self-interest has them playing the game as long as they are winning. Another possible answer is that they have not thought through the consequences, or really challenged themselves to generate an alternative. Another is that the ‘institutional research’ experts (e.g., those represented by the Association for Institutional Research in the case of the US) have not focused their attention on the matter. But whatever the answer, at the very least we think that they need to be posing themselves a set of questions. And if it’s not going to happen now, when will it? Only after MIT demonstrates some high profile global leadership on this issue, perhaps with Harvard, like it did with MITx and edX?
Kris Olds & Susan L. Robertson
Kris, Thank you for another great blog! You have left some good questions. As to the first question posed, I do think that governments should more closely analyze the nature of the ranking system and thereby the “rankers” themselves, especially if the results are not to their liking. As more higher education systems, at least in the US, are trying to turn to a more student-centered approach, I would hope that the “rankers” look more into the retention rates and student support offerings.
As to the second question, I would fear more government involvement in the higher education system, within the US or abroad. If funding is going to be given by governments/private parties to “rankers”, I expect there to be some involvement on how their research is conducted and what information they need to add to their rankings.
University rankings are not inherently bad. However, I think they should be used to better the student and their decision when choosing a college that meets their needs.
I find it very interesting that there is limited governance of the global university ranking system, but I can see how an international effort to standardize the system would be near impossible. The “rankers” are like food critics who are very informed, but have limited experience themselves working within the system they are evaluating. Like food critics, rankers carry an opinion that can make or break the establishment.
You raise interesting questions as to the purpose and ethics of university ranking methodologies. A transparent system should invite and encourage global discussion on the process, thereby improving the validity and reliability of the ranking methodologies. The analysis of the business model by Kris Olds was revealing and gives caution to the use of information derived from controlled sources. In contrast, his comments on the potential solution of the creation and access to common data sets were encouraging.
This was a really interesting blog. I had no idea that the international rankings organizations weren’t really monitored by anyone in particular. It seems they do just have a nice tidy little money maker. I also agree that it is not likely that higher education institutions will do anything to change the situation any time soon as long as they are all ranked as they think they should be. I do think it would be possible to create a global ranking system, but the question, as was stated in the blog, is who is best to oversee the process and own the data. Good food for thought.
I find this post pertaining to a common global ranking very interesting. It seems that there is limited governance when analyzing the global university ranking system. I find this interesting because strong governance is what makes institutions run successfully. When trying to standardize a system globally, determining governance would be much more difficult, which is why it seems the problem has not yet been resolved.
The questions you pose are deep and thought provoking, thank you. Yes, institutions need to be transparent and why are they trying to fit this ranking into the old school format? If it is student centered don’t they need to look at technology and how the student today communicates and collaborates? If they just use a business model only, will they be in the same situation in the future? Thanks for any feedback.
I agree with D. Altieri, that standardizing an international ranking system is next to impossible because there is no authority that is recognized to oversee such standards. It would be very difficult even within our own national borders to get a common data set to evaluate university rankings, let alone to establish one globally. That being said, however, with movement towards the industrialization of higher education, rankings are going to become increasingly important, and therefore, it is crucial that we have some idea of the methodology of how these rankings are achieved. It presents a very interesting dilemma.
Thank you for the great post. I believe if we are to discuss why the ranking is being done in the first place, it might help to see what type of policy or information is to be implemented, and how.
Whether the main reason for the ranking is to be used as a wake-up call for other institutions to become more competent worldwide, or to be used as an informative resource for future students to help their institution selection, world university rankings could be a valuable resource – again, depending on who is doing the ranking and how.
Kris, I concur that a global data set for rankings would require transparency and close scrutiny. The regulating agencies would have to be open to input from many organizations and institutions. In addition, could all of these groups agree on a single data set?
I agree, there is not a set global standard for rankings across the board. I think everything in higher education these days is data driven, which proves the point even further that a global standard may not be such a bad thing. Data determines where to spend precious institutional funds and dictates class sizes, hiring decisions, etc. It is important to have good data that can be shared across the board within a set standard.