Elephants in the room, and all that: more ‘reactions’ to the Bologna Process Leuven Communiqué

Editor’s Note: As those of you following GlobalHigherEd well know, the big news story of April on the higher education calendar was the release of the Leuven Communiqué following the 6th Bologna Ministerial Conference held in Leuven/Louvain-la-Neuve, 28-29 April 2009.

The 46 Bologna countries gathered to review progress toward realizing the objectives of the Bologna Process by 2010, and to establish the priorities for the European Higher Education Area. Prior to the meeting there was an avalanche of stocktaking reports, surveys, analyses and other kinds of commentary, all fascinating reading (see this blog entry for a listing of materials).

With the Communiqué released, and the ambition to take the Bologna Process into the next decade under the banner ‘The Bologna Process 2020’, GlobalHigherEd has invited leading European actors and commentators to ‘react’ to the Communiqué.

Last week we posted some initial ‘reactions’: Pavel Zgaga’s Bologna: beyond 2010 and over the Ocean – but where to? and Peter Jones’ Was there a student voice in Leuven? In this entry we add more, having invited Per Nyborg, Roger Dale, Pauline Ravinet and Anne Corbett to comment briefly on one aspect they each felt warranted highlighting.

Per Nyborg was Head of the Bologna Process Secretariat (2003-2005). Roger Dale, Professor of Sociology of Education, University of Bristol, UK, has written extensively on the governance of the European Higher Education Area and the role of the Bologna Process in it. He recently published a co-edited volume, Globalisation and Europeanisation in Education (2009, Symposium Books). Pauline Ravinet is a post-doctoral researcher at the Université Libre de Bruxelles, Belgium. She completed her doctoral research at Sciences Po, Paris, and has published extensively on the Bologna Process. Anne Corbett is Visiting Fellow, European Institute, London School of Economics and Political Science (LSE). Dr. Corbett is the author of Universities and the Europe of Knowledge: Ideas, Institutions and Policy Entrepreneurship in European Union Higher Education, 1955-2005 (Palgrave Macmillan, 2005).

Susan Robertson


“Bologna Toward 2010” – Per Nyborg

In 2005, halfway toward 2010, Ministers declared that they wished to establish a European Higher Education Area (EHEA) based on the principles of quality and transparency and our rich heritage and cultural diversity. They committed themselves to the principle of public responsibility for higher education. They saw the social dimension as a constituent part of the EHEA.

The three cycles were established, each level preparing students for the labor market, for further competence building and for active citizenship. The overarching framework for qualifications, the agreed set of European standards and guidelines for quality assurance, and the recognition of degrees and periods of study were seen as key characteristics of the structure of the EHEA.

What has been added four years later? Ministers have called upon European higher education institutions to further internationalize their activities and to engage in global collaboration for sustainable development. Competition on a global scale will be complemented by enhanced policy dialogue and cooperation based on partnership with other regions of the world. Global cooperation and global competition may have taken priority over solidarity between the 46 partner countries. But Bologna partners outside the European Economic Area region must not be left behind!

A clear and concise description of the EHEA and the obligations of the participating countries is what we should expect from the 2010 ministerial conference – at least if it is to be seen as the founding conference of the EHEA, not merely a Bologna anniversary on the way to 2020.


“Elephants in the Room, and All That” – Roger Dale

Reading the Leuven Communiqué, we can’t help but be impressed by the continuing emphasis on the public nature of, and public responsibility for, higher education that has characterized the Bologna Follow-up Group’s (BFUG) statements over the years. Indeed, the word ‘public’ appears nine times.

However, at the same time we can’t help wondering about some other words and connotations that do not appear.

The nature of the ‘fast evolving society’ to which, the Communiqué implies, the EHEA is to respond seems rather different from that implied by some of these elephants in the room.

Quite apart from ‘private’ (which as Marek Kwiek has constantly reminded us is indispensable to the understanding of HE in many of the newer member states), we may cite the following:

  • First and foremost, ‘Lisbon’, with its dominant focus on productivity and growth;
  • Second, the ‘European Commission’, the home and driver of Lisbon, and the indispensable paymaster and facilitator of the Bologna Process;
  • Third, the ‘European Research Area’; surely a report on European universities/ERA would paint a rather different picture of the universities over the next decade from that presented here.

It is difficult to see how complete and accurate a picture of Bologna, as it enters its second phase, this ‘more of the same’ Communiqué provides. Perhaps the most pregnant phrase in the document is “liaise with experts in other fields, such as research, immigration, social security and employment” – a very mixed and interesting quartet, whose different demands may pose real problems of harmonization.


“The Bologna Process – a New Institution?” – Pauline Ravinet

My own research has focused in particular on the early phases and the subsequent institutionalization of the Bologna Process. In this work I have tried to reconstruct and analyze what happened between 1998 – the year the process began with the unexpected Sorbonne declaration – and now, when the Bologna Process has become the central governance arena for higher education in Europe. This did not happen in a day; rather, it was the progressive invention of a unique European structure for the coordination of national higher education policies.

Reading the Leuven Communiqué with this institutionalization question in mind is extremely interesting. Presenting the achievements of the 2000s and defining the priorities for the decade to come, the text states more explicitly than any Bologna document before it that the process has gone much further than a ten-year provisional arrangement for the attainment of common objectives.

The Bologna Process is becoming an institution. It is, first, an institution in the most formal sense: the Bologna Process designates an original organizational structure, functioning according to specific rules and equipped with innovative coordination tools, which will not perish but on the contrary enter a new life cycle in 2010. The Bologna Policy Forum, which met on 29 April, will be the formal group that engages with the globalization of Bologna – a further expression of the institutionalization of the process.

But it is also an institution in a more sociological sense. The Bologna arena has acquired value and legitimacy beyond the performance of specific tasks; it embeds and diffuses a policy vision which frames the representations and strategies of higher education actors all over Europe, and catches the interest of students, academia and HE experts worldwide.


“Fit for Purpose?” – Anne Corbett

‘The European Higher Education Area in the New Decade’ has European Ministers responsible for higher education declaring (para 24) that

‘[t]he present organisational structure of the Bologna Process is endorsed as being fit for purpose’.

You may think this a boring detail. As a political scientist, however, I’d argue that it is the most theoretically and politically interesting phrase in the Communiqué. In policy-making terms, the Bologna decade has been about framing issues, negotiating agendas, developing policies and testing out modes of cooperation which can be accepted throughout a Europe re-united for the first time in 50 years.

These are the sorts of activities typically carried out by experts and officials who are somewhat shielded from the political process. For a time they enjoy a policy monopoly (Baumgartner and Jones 1993 – see reference below).

In terms of policy effectiveness this is all to the good. The people who devote time and thought to these issues have to build up relations of trust and respect. They don’t need politicians to harry them over half-thought-out ideas.

The Bologna Follow-up Group, which devises and delivers a work programme corresponding to the wishes of ministers, has produced an unprecedented degree of voluntary cooperation on instruments as well as aims (European Standards and Guidelines for Quality Assurance, qualifications frameworks, and stocktaking or national benchmarking), thanks to working groups that recruit quite widely, seminars, and so on. Almost every minister at the Leuven conference started his or her 90-second speech with tributes to the BFUG.

But there comes a time in every successful policy process when political buy-in is needed. The EHEA-to-be does not have that. Institutionally, Bologna is run by ministers and their administrations, technocrats and lobbyists. Finance (never mentioned in any communiqué) is provided by the EU Commission, EU presidencies and the host countries of ministerial conferences (up to now EU). Records of the Bologna Process remain the property of the ministries providing the secretariat in a particular policy cycle. “It works, don’t disturb it” is the universal message of those insiders who genuinely want it to advance.

Students in the streets (as opposed, as Peter Jones’ entry reminds us, to those in the Brussels-based European Students’ Union) are a sign that a comfortably informal process has its limits once an implementation stage is reached. This is such a well-known political phenomenon that it is astonishing that sophisticated figures in the BFUG are not preparing to open the door to the idea that an EHEA needs arenas, at national and European level, where ministers are answerable to the broad spectrum of political opinion. Parliamentarians could be in the front line here. Will either of the European assemblies or any of the 46 national parliaments take up the challenge?

Baumgartner, F. and B. Jones (1993) Agendas and Instability in American Politics. Chicago: University of Chicago Press.

Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been a project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales, Swansea.


Talking about rankings usually means talking about league tables. Values are calculated on the basis of weighted indicators, which are then turned into figures, added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared, and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all – which is not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?


Firstly, we do not believe in ranking entire HEIs. This is mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact its physics department is outstanding, its sociology department appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that overall he is doing fine.

So instead we always rank at the subject level. The results of the first ExcellenceRanking – which focused on the natural sciences and mathematics in European universities, with a clear target group of prospective Master’s and PhD students – prove the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was in a set of quite closely related fields.


Secondly, we do not create values by weighting indicators and then calculating an overall value. Why? The main reason is that any weight is necessarily arbitrary – in other words, political. The person doing the weighting decides which weight to give, and by doing so pre-decides the outcome of the ranking. Adding the different values together into one overall value makes it even worse, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of 2 and citations a weight of 1, the department will look very strong. If you do it the other way around, it will look pretty weak. If you then add the values, you make it even worse, because you blur the difference between the two performances. And those two indicators are even rather closely related; if you aggregate results from research indicators with reputation indicators, the outcome becomes entirely uninterpretable.
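The arithmetic of that example can be sketched in a few lines of Python. The department names and indicator values below are invented for illustration (this is not CHE’s, or anyone’s, actual scoring code); the point is only that the choice of weights alone flips the ordering:

```python
# Hypothetical illustration: two departments with opposite
# publication/citation profiles (names and numbers invented).
depts = {
    "A": {"publications": 90, "citations": 10},  # publishes a lot, little read
    "B": {"publications": 40, "citations": 60},  # publishes less, widely cited
}

def overall_score(scores, weights):
    """The league-table approach: a weighted sum of indicator values."""
    return sum(weights[k] * scores[k] for k in weights)

w_pub_heavy = {"publications": 2, "citations": 1}  # publications weighted 2:1
w_cit_heavy = {"publications": 1, "citations": 2}  # citations weighted 2:1

# Same data, two weightings, two opposite "league tables"
rank_pub = sorted(depts, key=lambda d: overall_score(depts[d], w_pub_heavy), reverse=True)
rank_cit = sorted(depts, key=lambda d: overall_score(depts[d], w_cit_heavy), reverse=True)

print(rank_pub)  # ['A', 'B'] - department A "wins"
print(rank_cit)  # ['B', 'A'] - department B "wins"
```

Nothing about the departments changed between the two tables; only the ranker’s political choice of weights did.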

Instead, we let the indicator results stand on their own and let users decide what is important for their personal decision-making process. For example, in the classical ranking we allow users to create “my ranking”, choosing the indicators they want to look at and the order in which to apply them.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the aggregation), the league-table positions create the even worse illusion of distinct and decisive differences between places. They conjure up the impression of a difference in quality (no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point – in other words, of a qualitative, objectively recognizable, measurable difference between place 12 and place 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league-table positions. Take the THES-QS rankings: even in the social sciences subject cluster there is a mere 4.3-point difference, on a 100-point scale, between league ranks 33 and 43. In the overall university rankings, a meagre 6.7 points separate ranks 21 and 41, shrinking to a slim 15.3 points between ranks 100 and 200. That is to say, adjacent league-table positions of HEIs may differ by much less than a single point, or less than 1% (of an arbitrarily set figure); the gap tells us much less than the league position suggests.
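A quick back-of-the-envelope check makes the point concrete. The score gaps below are the ones quoted above; the only thing this sketch adds is dividing each gap by the number of rank positions it spans:

```python
# Score gaps quoted in the text: (label, points separating the ranks, positions spanned)
gaps = [
    ("SocSci ranks 33-43", 4.3, 43 - 33),
    ("overall ranks 21-41", 6.7, 41 - 21),
    ("overall ranks 100-200", 15.3, 200 - 100),
]

# Average score difference per single league-table position
per_position = {label: points / positions for label, points, positions in gaps}

for label, gap in per_position.items():
    print(f"{label}: ~{gap:.2f} points per rank position")
```

In every case the average distance between neighbouring positions is well under half a point on a 100-point scale, which is exactly why presenting rank 33 as meaningfully "better" than rank 34 is an illusion.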

Our approach, therefore, is to create groups (top, middle, bottom) which refer to the performance of each HEI relative to the other HEIs.
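A minimal sketch of this grouping idea, per indicator, might look as follows. The quartile thresholds and the toy scores are assumptions for illustration only; CHE’s actual group boundaries are defined per indicator and not by this rule:

```python
def group_by_quartiles(scores):
    """Place each HEI's indicator score in a top/middle/bottom group
    relative to the other HEIs, instead of assigning a league position."""
    ordered = sorted(scores.values())

    def quantile(q):
        # Linear interpolation between order statistics
        idx = q * (len(ordered) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(ordered) - 1)
        return ordered[lo] + (ordered[hi] - ordered[lo]) * (idx - lo)

    top, bottom = quantile(0.75), quantile(0.25)
    return {
        name: "top" if s >= top else "bottom" if s <= bottom else "middle"
        for name, s in scores.items()
    }

# Invented indicator scores for four hypothetical institutions
groups = group_by_quartiles({"A": 90, "B": 55, "C": 50, "D": 10})
print(groups)
```

The output no longer claims that B (55 points) is measurably "better" than C (50 points); both simply sit in the middle group, which is all the data can honestly support.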


This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (ours included) is based on indicators chosen by the ranking institution. Consequently, the results of a given ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but perhaps (depending on the methodology) who is performing best – or, in our case, better than average – in the aspects the ranker considers relevant.

A small but highly relevant aspect might be added here. Rankings (in the HE system as in other areas of life) may suggest that a result on an indicator proves that an institution performs well in the area measured. It does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the institution’s performance and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which “indicator” derives – means exactly that: a hint, not a proof. And for many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in creating transparency if they are properly used – that is, if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among several for making whatever decision relates to an HEI (study, cooperation, funding, etc.).

Finally, modesty is perhaps what a ranker should have in abundance. Having run the ExcellenceRanking through three phases (the initial round in 2007, a second phase with new subjects under way now, and a repetition of the natural sciences just starting), I am certain of one thing: however hard we strive to be sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something – of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this area (and often referring to our method) include Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg