Europe-based CHERPA network wins tender to develop alternative global ranking of universities


The decision on who has won the European Commission’s million-euro tender – to develop and test a global ranking of universities – has finally been announced.

The successful bidder – the CHERPA network (the Consortium for Higher Education and Research Performance Assessment) – is charged with developing a ranking system that overcomes what the European Commission regards as the limitations of the Shanghai Jiao Tong and QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises leading European institutions in the field; all have been developing and offering rather different approaches to ranking over the past few years (see our earlier stories here, here and here for some of the potential contenders).

Will this new European Commission-driven initiative set the proverbial European cat amongst the Transatlantic alliance pigeons?

As we have noted in earlier commentary on university rankings, the different approaches tip the rankings playing field in the direction of different interests. Much to the chagrin of continental Europeans, high-status US universities do well on the Shanghai Jiao Tong University Ranking, whilst Britain’s QS-Times Higher Education scheme tends to see UK universities feature more prominently.

CHERPA will develop a design that follows the so-called ‘Berlin Principles on the ranking of higher education institutions’. These principles stress the need to take the linguistic, cultural and historical contexts of educational systems into account [something of an irony for those watching UK higher education developments last week following a Cabinet reshuffle, in which the reference to ‘universities’ in the departmental name was dropped: the two-year-old Department for Innovation, Universities and Skills has been abandoned in favor of a mega-Department for Business, Innovation and Skills! (read more here)].

According to the website of one of the consortium members, CHE:

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

The different rankings will be targeted at different stakeholders: They will support decision-making in universities and especially better informed study decisions by students. Rankings that create transparency for prospective students should promote access to higher education.

University World News, in its report out today on the announcement, notes:

Testing will take place next year and must include a representative sample of at least 150 institutions with different missions in and outside Europe. At least six institutions should be drawn from the six large EU member states, one to three from the other 21, plus 25 institutions in North America, 25 in Asia and three in Australia.

There are multiple logics and politics at play here. On the one hand, a European ranking system may well give the European Commission more HE governance capacity across Europe, strengthening its steering of national systems in areas like ‘internationalization’ and ‘regional engagement’ – two key areas identified for work to be undertaken by CHERPA.

On the other hand, this new European ranking system – when realized – might also appeal to countries in Latin America, Africa and Asia that currently do not feature in any significant way in the two dominant systems. Like the Bologna Process, the CHERPA ranking system might well find itself generating ‘echoes’ around the globe.

Or will regions around the world prefer to develop and promote their own niche ranking systems, elements of which were evident in the recently launched QS.com Asia ranking? Whatever the outcome, as we have observed before, there is a thickening industry, with profits to be had, around this aspect of the emerging global higher education landscape.

Susan Robertson

Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, turned into figures, added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared, and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all – which is not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?


Firstly, we do not believe in ranking entire HEIs, mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact its physics department is outstanding, its sociology appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that overall he is doing fine.

So instead we always rank at the subject level. The results of the first ExcellenceRanking – which focused on the natural sciences and mathematics in European universities, with a clear target group of prospective master’s and PhD students – prove the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was within a quite closely related field.


Secondly, we do not create values by weighting indicators and then calculating an overall value. Why is that? The main reason is that any weight is necessarily arbitrary – in other words, political. The person doing the weighting decides which weight to give, and by doing so pre-decides the outcome of any ranking. You make it even worse when you then add the different values together into one overall value, because this blurs the differences between individual indicators.

Say a discipline publishes a lot but nobody reads it. If you give publications a weight of two and citations a weight of one, the department will look very strong; if you do it the other way around, it will look pretty weak. If you then add the values you make it even worse, because you blur the difference between the two performances. And those two indicators are rather closely related; if you combine results from research indicators with reputation indicators, you make things entirely irrelevant.
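A toy calculation makes the arbitrariness concrete (a minimal sketch with invented departments and numbers, not CHE data or anyone’s actual methodology):

```python
# Two hypothetical departments scored 0-100 on two indicators:
# A publishes heavily, B is cited heavily.
departments = {
    "A": {"publications": 90, "citations": 40},
    "B": {"publications": 50, "citations": 85},
}

def overall(scores, weights):
    """The league-table recipe: a weighted sum collapsed into one value."""
    return sum(scores[k] * w for k, w in weights.items())

# Weighting publications twice as heavily as citations makes A the 'winner'...
print({d: overall(s, {"publications": 2, "citations": 1})
       for d, s in departments.items()})   # {'A': 220, 'B': 185}

# ...while the reverse weighting makes B the 'winner'. Same data, opposite result.
print({d: overall(s, {"publications": 1, "citations": 2})
       for d, s in departments.items()})   # {'A': 170, 'B': 220}
```

The data never change; only the weighting decision does – and that decision is made before the “result” is ever computed.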

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classical ranking we allow users to create “my ranking”, choosing the indicators they want to look at and in which order.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the accumulation), the league table positions create the even worse illusion of distinct and decisive differences between places. They conjure up the impression of a difference in quality (no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point – in other words, of a qualitative, objectively recognizable and measurable difference between place 12 and place 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES QS: even in the social sciences subject cluster there is a mere 4.3-point difference on a 100-point scale between league ranks 33 and 43. In the overall university ranking, there is a meagre 6.7-point difference between ranks 21 and 41, going down to a slim 15.3-point difference between ranks 100 and 200. That is to say, the league table positions of HEIs might differ on the basis of much less than a single point, or less than 1% (of an arbitrarily set figure); a league position therefore tells us much less than it suggests.

Our approach, therefore, is to create groups (top, middle, bottom) which reflect the performance of each HEI relative to the other HEIs.
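In code, the contrast with a league table might look like this (a minimal sketch: the tercile split and the institution names and scores are our illustration, not the exact CHE procedure):

```python
# Place institutions into top/middle/bottom groups on one indicator,
# instead of assigning each a unique league-table position.
def group_scores(scores):
    """Split institutions into performance terciles on a single indicator."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    labels = {}
    for i, inst in enumerate(ranked):
        if i < n / 3:
            labels[inst] = "top"
        elif i < 2 * n / 3:
            labels[inst] = "middle"
        else:
            labels[inst] = "bottom"
    return labels

citations = {"U1": 71.2, "U2": 70.9, "U3": 55.0,
             "U4": 54.8, "U5": 30.1, "U6": 29.9}
print(group_scores(citations))
# {'U1': 'top', 'U2': 'top', 'U3': 'middle', 'U4': 'middle',
#  'U5': 'bottom', 'U6': 'bottom'}
# U1 and U2 sit in the same group: a 0.3-point gap no longer
# manufactures the illusion of two distinct 'places'.
```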


This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators chosen by the ranking institution. Consequently, the results of a given ranking can tell you something about how an HEI performs within the framework of what the ranker considers interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but at most (depending on the methodology) who is performing best – or, in our case, better than average – in aspects the ranker considers relevant.

A small but highly relevant aspect might be added here. Rankings (in the HE system as in other areas of life) might suggest that a result on an indicator proves that an institution is performing well in the area measured. It does not. All an indicator does is hint: provided the data are robust and relevant, the results give some idea of how large the gap is between the performance of the institution and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And for many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (are more international students better? are more exchange agreements better?).

This is why we argue that rankings have a useful function in creating transparency if they are properly used, i.e. if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among others for making whatever decision is at hand in relation to an HEI (study, cooperation, funding, etc.).

Finally, modesty is perhaps what a ranker should have in abundance. Having run the ExcellenceRanking through three phases (the initial round in 2007, a second phase with new subjects under way now, and a repetition of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something – of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this area (and often referring to our method) include Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

China: from ‘emerging contender’ to ‘serious player’ in cross-border student mobility

Last year we carried a series of reports (see here, here and here) on the global distribution of student mobility. While the US and the UK had the lion’s share of this market, with 22% and 12% respectively, we noted that China had made big gains. With 7% of the global market and in 6th place overall, it was an ‘emerging contender’ to be taken seriously, with trends suggesting it was becoming a serious player as both an ‘exporter’ and an importer of education services.

So it was with great interest that I read today’s Chronicle of Higher Education report by Mara Hvistendahl on China now being ranked in 5th place (behind the US, UK, France and Germany) as an “importer” of foreign students. See this OECD chart, from its new Education at a Glance 2008 report, to situate this development trend and China’s current position [recall that China is not an OECD member country].

As the Chronicle report notes, this is a far cry from China’s 33 overseas students in 1950.

Given, too, that in 1997 there were only 39,000 foreign students whilst in 2007 there were some 195,000, this five-fold increase in 10 years (Chinese Ministry of Education and the China Scholarship Council) represents a staggering achievement, and one that is likely to continue. So how has China achieved this? According to the Chronicle report:

To attract students, China offers competitive packages, replete with living stipends, health insurance, and, sometimes, travel expenses. In 2007 the China Scholarship Council awarded 10,000 full scholarships — at a cost of 360 million yuan ($52-million) — to international students. By 2010 the council aims to double the number of awards.

Two-fifths of the 2007 grants went to students in Asia. In a separate scholarship program that reflects its global political strategy, China is using its strengths in science and technology to appeal to students in the Middle East, Africa, and Central Asia, forming partnerships with governments in those regions to sponsor students in medicine, engineering, and agriculture.

But there are other factors pushing China up the ladder as an education destination. China is increasingly regarded as a strategic destination by American students and the US government for study abroad. Figures reported in the Institute of International Education’s fact sheet on student mobility to and from the US show an increase of 38% in US students going to China in just one year (2005/2006). This also represents a profound shift in Sino-American educational relations.

In sum, these figures reflect the outcome of an overall strategy by China (perversely aided by the US’s own global trade and diplomacy agenda):

  • to develop a world class higher education system;
  • to internationalize Chinese higher education;
  • to stem the tide of students flowing out of China;
  • to attract half a million students to China by 2020; and
  • to advance Chinese interests through higher education diplomacy.

If realized, this would put China at the top of the exporting nations along with the US. It would also register China as a higher education player with global impact. Without doubt this will change the geo-politics of global higher education.

Susan Robertson

Graphic feed: growing global demand for higher education (2000-2025)

Source: Brandenburg, U., Carr, D., Donauer, S., Berthold, C. (2008) Analysing the Future Market – Target Countries for German HEIs, Working paper No. 107, CHE Centre for Higher Education Development, Gütersloh, Germany, p. 13.

Graphic feed: global student mobility matrix (2005)

Source: Internationalization of Higher Education: Foreign Students in Germany-German Students Abroad. Results of the 18th Social Survey of the Deutsches Studentenwerk (DSW) conducted by HIS Hochschul-Informations-System, 2008.

Update: see nanopolitan’s interesting 4 June reflections (‘Indian’s studying abroad’) on this table, and the changing nature of the Indian student presence in the USA.

International students in the UK: interesting facts

Promoting and responding to the globalisation of the higher education sector is a myriad of newer actors and agencies, including the UK Higher Education International Unit. Set up in 2007, the UK HE International Unit aims to provide:

credible, timely and relevant analysis to those managers engaged in internationalisation across the UK HE sector, namely – Heads of institutions, pro-Vice Chancellors for research and international activities; Heads of research/business development offices and International student recruitment & welfare officers.

The UK HE International Unit publishes and profiles (with download options) useful analytical reports, as well as providing synoptic comparative pictures of international student and staff recruitment in UK higher education institutions and their competitors. Its newsletter is well worth subscribing to.

Readers of GlobalHigherEd might find the following UK HE International Unit compiled facts interesting:

  • In 2004, 2.7 million students were enrolled in HEIs outside their countries of citizenship. In 2005-06, six countries hosted 67% of these students (23% in the US, 12% in the UK, 11% in Germany, 10% in France, 7% in Australia, and 5% in Japan). (UNESCO, 2006)
  • New Zealand’s share of the global market for international students increased more than fourfold between 2000 and 2006. Australia’s increased by 58% and the UK’s by 35%. (OECD, 2006)
  • There were 223,850 international students (excluding EU) enrolled at UK HEIs in 2005-06, an increase of 64% in just five years. There were a further 106,000 EU students in 2005-06. (HESA, 2006)
  • International students make up 13% of all HE students in the UK, third in proportion only to New Zealand and Australia. For those undertaking advanced research programmes, the figure is 40%, second only to Switzerland. The OECD averages are 6% and 16%, respectively. (OECD, 2006)
  • UK HEIs continue to attract new full-time undergraduates from abroad. The number of new international applicants for entry in 2007 was 68,500, an increase of 7.8% on the previous year. The number of EU applicants rose by 33%. (UCAS, 2007)
  • Students from China make up almost one-quarter of all international students in the UK. The fastest increase is from India: in 2007 there were more than 23,000 Indian students in the UK, a five-fold increase in less than a decade. (British Council, 2007)
  • The number of students in England participating in the Erasmus programme declined by 40% between 1995-96 and 2004-05 – from 9,500 to 5,500. Participation from other EU countries increased during this period. However, North American and Australian students have a lower mobility level than their UK counterparts. (CIHE, 2007).

Susan Robertson

Has higher education become a victim of its own propaganda?

Editor’s note: today’s guest entry was kindly written by Ellen Hazelkorn, Dean of the Faculty of Applied Arts and Director of the Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, Ireland. She also works with the OECD’s Programme on Institutional Management in Higher Education (IMHE). Her entry should be read in conjunction with some of our recent entries on the linkages and tensions between the Bologna Process and the Lisbon Strategy, the role of foundations and endowments in facilitating innovative research yet also heightening resource inequities, and the ever-present benchmarking and ranking debates.

~~~~~~~~~

The recent Council of the European Union statement on the role of higher education is another in a long list of statements from the EU, national governments, the OECD, UNESCO, etc., proclaiming the importance of higher education (HE) to/for economic development. While HE has long yearned for the time when it would head the policy agenda and be rewarded with vast sums of public investment, it may not have realised that increased funding would be accompanied by calls for greater accountability and scrutiny, pressure for value-for-money, and organisational and governance reform. Many critics cite these developments as changing the fundamentals of higher education. Has higher education become the victim of its own propaganda?

At a recent conference in Brussels a representative of the EU reflected on this paradox. The Lisbon Strategy identified a future in which Europe would be a/the leader of the global knowledge economy. But when the statistics were reviewed, there was a wide gap between vision and reality. The Shanghai Academic Ranking of World Universities, which has become the gold standard of worldwide HE rankings, identified too few European universities among the top 100. This was, he said, a serious problem and a blow to the European strategy. Change is required, urgently.

University rankings are, whether we like it or not, beginning to influence the behaviour of higher education institutions and higher education policy, because they arguably provide a snapshot of competition within the global knowledge industrial sector (see E. Hazelkorn, Higher Education Management and Policy, 19:2, and forthcoming in Higher Education Policy, 2008). Denmark and France have introduced new legislation to encourage mergers or the formation of ‘pôles’ to enhance critical mass and visibility, while Germany and the UK are using national research rankings or teaching/learning evaluations as a ‘market’ mechanism to effect change. Others, like Germany, Denmark and Ireland, are enforcing changes in institutional governance, replacing elected rectors with corporate CEO-type leadership. Performance funding is a feature everywhere. Even the European Research Council’s method of ‘empowering’ (funding) the researcher rather than the institution is likely to fuel institutional competition.

In response, universities and other HEIs are having to look more strategically at the way they conduct their business and organise their affairs, and at the quality of their various ‘products’, e.g. educational programming and research. In return for increased autonomy, governments want more accountability; in return for more funding, governments want more income-generation; in return for greater support for research, governments want to identify ‘winners’; and in return for valuing HE’s contribution to society, governments want measurable outputs (see, for example, this call for an “ombudsman” for higher education in Ireland).

European governments are moving from an egalitarian approach – where all institutions are broadly equal in status and quality – to one in which excellence is promoted through elite institutions, differentiation is encouraged through competitive funding, public accountability is driven by performance measurements or institutional contracts, and student fees are a reflection of consumer buoyancy.

But neither the financial costs nor the implications of this strategy – for both governments and institutions – have been thought through. The German government has invested €1.9b over five years in the Excellence Initiative, but this sum pales into insignificance beside claims that a single ‘world class’ university is a $1b-$1.5b annual operation, plus a further $500m with a medical school, or beside other national investment strategies, e.g. China’s $20b ‘211 Project’ or Korea’s $1.2b ‘Brain Korea 21’ programme, or the fund-raising capabilities of US universities (‘Updates on Billion-Dollar Campaigns at 31 Universities’; ‘Foundations, endowments and higher education: Europe ruminates while the USA stratifies‘).

Given public and policy disdain for increased taxation, if European governments wish to compete in this environment, which policy objectives will be sacrificed? Is the rush to establish ‘world-class’ European universities hiding a growing gap between private and public, research and teaching, elite and mass education? Evidence from Ireland suggests that despite efforts to retain a ‘binary’ system, students are fleeing the less endowed, less prestigious institutes of technology in favour of ‘universities’. At one stage the UK government promoted the idea of concentrating research activity in a few select institutions/centres, until critics – notably the Lambert report and, more recently, the OECD – argued that regionality does matter.

Europeans are keen to establish a ‘world class’ HE system which can compete with the best US universities. But it is clear that such efforts are being undertaken without a full understanding of the implications, intended and unintended.

Ellen Hazelkorn

OECD’s science, technology and industry scoreboard 2007

Every two years the OECD publishes a Science, Technology and Industry Scoreboard. Yesterday it released its 2007 assessment of trends in the macroeconomic elements intended to stimulate innovation – knowledge and globalization – and their impacts on economic performance.

GlobalHigherEd has taken a look at the major findings of the report and highlights them below. These indicators of ‘innovation’, presumed to lead to ‘economic growth’, reveal a particular set of assumptions at work. For instance:

  • Investment in ‘knowledge’ (by which the OECD means software and education) has increased in most OECD countries.
  • Expenditure on R&D (as a % of GDP) in Japan (3.3%) and the EU (1.7%) picked up in 2005 following a drop in 2004. However, US expenditure on R&D declined slightly (to 2.6% in 2005 from 2.7% in 2001). China is the big feature story here, with spending on R&D growing even faster than its economy – by 18% per year over the period 2000-2005.
  • Countries like Switzerland and Belgium, and English-speaking countries (the US, UK, etc.), have large numbers of foreign doctoral students, with the US having the largest. About 10,000 foreign citizens obtained a doctorate in S&E in the US in 2004/5, representing 38% of S&E doctorates awarded.
  • Governments in OECD countries are putting into place policy levers to promote R&D – such as directing government funds to R&D through tax relief.
  • Universities are being encouraged to patent their innovations, and while the overall share of patents filed by universities has been relatively stable, this is increasing in selected OECD countries – France, Germany and Japan.
  • European companies (EU27) finance 6.4% of R&D performed by public institutions and universities compared to 2.7% in the US and 2% in Japan.
  • China now ranks 6th worldwide in its share of scientific publications and has raised its share of triadic patents from close to 0% in 1995 to 0.8% in 2005, though the US, Europe and Japan remain at the forefront. However, the US and the emerging economies (India, China, Israel, Singapore) focus upon high-tech industries (computers, pharmaceuticals), whilst continental Europe focuses on medium technologies (automobiles, chemicals).
  • In all OECD countries inventive activities are becoming more geographically concentrated in innovation clusters, as in Silicon Valley and Tokyo.
  • There has been a steady diffusion of ICT across all OECD countries – though take-up of broadband in households varies, with Italy and Ireland showing only 10-15% penetration.
  • Across all OECD countries, use of the internet has become standard in businesses with over 10 employees.

These highlights from the Scoreboard reflect a number of things. First, it takes a particular (and very narrow) view of the basis for developing knowledge societies. Knowledge, as we can see above, is reduced to software and to education for developing human capital.

Second, there is a particular way of framing science and technology and their relationship to development – via levels of expenditure on R&D, rates of scientific publication, and use of ICTs.

Third, it is assumed that the combination of inventions, patents and innovations will be the necessary boost to economic growth. However, this approach privileges intellectual property rights over and above other forms of invention and innovation which might contribute to the intellectual commons, as in open source software.

Finally, we should reflect on the purpose of the Scoreboard. A country’s ‘progress’ (or lack of it) is used by politicians and policymakers to argue for boosting investment and performance in particular areas of science and technology – as in recruiting more foreign students into graduate programs, or developing incentives such as the promise of an EU Blue Card to ensure the brainpower stays in the country. But the Scoreboard is also a pedagogical tool: a country ‘learns’ about itself in relation to other players in the global economy and is given a clear message about the overall direction it should head in if it wants to be a globally competitive knowledge-based economy.

Susan Robertson

EU Blue Cards: not a blank cheque for migrant labour – says Barroso

The global competition for skilled labor looks set to gain a new dimension: the EU is planning to issue “Blue Cards” allowing highly skilled non-Europeans to work in the EU. On Tuesday 23 October José Manuel Barroso, President of the European Commission, announced plans to harmonize admission procedures for highly qualified workers. As President Barroso put it:

With the EU Blue Card we send a clear signal: Highly skilled people from all over the world are welcome in the European Union. Let me be clear: I am not announcing today that we are opening the doors to 20 million high-skilled workers! The Blue Card is not a “blank cheque”. It is not a right to admission, but a demand-driven approach and a common European procedure.

The Blue Card will also mean increased mobility for high-skilled immigrants and their families inside the EU.

Member States will have broad flexibility to determine their labour market needs and decide on the number of high-skilled workers they would like to welcome.

With regard to developing countries we are very much aware of the need to avoid negative “brain drain” effects. Therefore, the proposal promotes ethical recruitment standards to limit – if not ban – active recruitment by Member States in developing countries in some sensitive sectors. It also contains measures to facilitate so-called “circular migration”. Europe stands ready to cooperate with developing countries in this area.

Further details are available in this press release, with media and blog coverage available via these pre-programmed Google searches. As noted, the proposed scheme would have a common single application procedure across the 27 Member States and a common set of rights for non-EU nationals, including the right to stay for two years and to move within the EU to another Member State for an extension of one more year.

The urgency of the Blue Card’s introduction is framed in terms of competition with the US, Canada and Australia – the US alone attracts more than half of all skilled migrant labor, while only 5 per cent currently comes to the EU. This explanation needs to be seen in relation to two issues the GlobalHigherEd blog has been following: the competition to attract and retain researchers, and the current overproduction of Maths, Science and Technology graduates. Can the attractiveness of the EU as a whole compete with the pull of R&D/industrial capacity in the US and the logic of English as the global language? Related to this, obviously, is the recent enlargement to 27 Member States, where there are ongoing issues around the mobility of labor within the EU. We will continue to look beneath the claims of policy initiatives to see the underlying contradictions in approaches. The ongoing construction of a common European labor market and the boosting of the attractiveness of EU higher ed institutions may be at least as important here as the supposed skilled labor shortages.

Futurological demographics seem to be at the heart of the explanation of the need to intensify the recruitment of non-EU labour: according to the Commission, the EU will face a shortage of 20 million workers in the next 20 years, with one third of the EU population over the age of 65. Interestingly, though, there is no specification of the kinds of skill shortages that far down the line – the current concern is that the EU receives 85% of global unskilled labour.

Barroso and the Commission continue to try to handle the contradictions of EU brain attractiveness strategies by the preferred model of:

  • fixed term contracts;
  • limitations on recruitment from developing countries in sensitive sectors; and,
  • the potentially highly tendentious notion of ‘circular migration’.

Highly skilled labour is effectively placed on a perpetual carousel of entry to and exit from the labour market, with equal rights while in the EU which are lost at the point of departure from the EU zone, only to reappear on re-entry, perhaps?

According to Reuters, successful applicants for a Blue Card would only need to be paid twice the minimum wage of the employing Member State – and this requirement would be lifted if the applicant were a graduate of an EU higher education institution. Two things are of interest here, then: the Blue Card could be a way to retain anyone with a higher education qualification, and there are implications for the continuing downward pressure on wage rates for the university-educated. It will be interesting to see how this plays out in relation to the attractiveness of EU universities if a Blue Card is the implied pay-off for successful graduation.

Peter D. Jones

Is the EU on target to meet the Lisbon objectives in education and training?

The European Commission (EC) has just released its annual 2007 report, Progress Towards the Lisbon Objectives in Education and Training: Indicators and Benchmarks. This 195-page document highlights the key messages about the main policy areas for the EC – from the rather controversial inclusion of schools (because of issues of subsidiarity) to what has become more standard fare for the EC: the vocational education and higher education sectors.

As we explain below, while the Report gives the thumbs up to the numbers of Maths, Science and Technology (MST) graduates, it gives the thumbs down to the quality of higher education. We, however, think that the benchmarks are far too simplistic and the conclusions drawn not sufficiently rigorous to support good policymaking. Let us explain.

The Report is the fourth in a series of annual assessments examining performance and progress toward the Education and Training 2010 Work Programme. These reports work as a disciplinary tool for Member States as well as contributing to making the EU more globally competitive.

For those of you unfamiliar with EC ‘speak’: the EC’s Work Programme centers on the realization of 16 core indicators (agreed in May 2007 at the European Council and listed in the table below) and five benchmarks (also listed below) which emerged from the relaunch of the Lisbon Agenda in 2005.

[Table: the 16 core indicators of the Education and Training 2010 Work Programme]

[Chart: the five benchmarks]

Chapter 7 of the Report concentrates on progress toward modernizing higher education in Europe, though curiously enough there is no mention of the Bologna Process – the radical reorganization of the degree structure of European universities which has the US and Australia on the back foot. Instead, three key areas are identified:

  • mathematics, science and technology graduates (MST)
  • mobility in higher education
  • quality of higher education institutions

With regard to MST, the EU is well on course to surpass the benchmark of increasing the number of tertiary MST graduates. However, the report notes that demographic trends (decreasing cohort size) will slow down growth in the long term.

[Chart: tertiary MST graduates in the EU]

While laudable, GlobalHigherEd notes that it is not so much the number of graduates produced that is the problem; rather, there are not enough attractive opportunities for researchers in Europe, so a significant percentage move to the US (14% of US graduates come from Europe). The long-term attractiveness of Europe (see our recent entry) in terms of R&D is, therefore, still a major challenge.

With regard to mobility (see our earlier overview report), the EU has seen an increase in the percentage of students with foreign citizenship. In 2004, every EU country, with the exception of Denmark, Estonia, Latvia, Lithuania, Hungary and Slovakia, recorded an increase in the percentage of enrolled students with foreign citizenship. Austria, Belgium, Germany, France, Cyprus and the UK have the highest proportions, with foreign student populations of more than 10%.

Over the period 2000 to 2005 the number of students going to Europe from China increased more than five-fold (from 20,000 in 2000 to 107,000 in 2005; see our more detailed report on this), while numbers from India increased four-fold. While there is little doubt that the USA’s homeland security policy was a major factor, students also view the lower fees and moderate living costs in countries like France and Germany as particularly attractive. In the main:

  • non-European students studying in the EU largely come from former colonies of the European member states
  • mobility is within the EU rather than from beyond the EU, with the exception of the UK. The UK is also a stand-out case because of the small number of its citizens who study in other EU countries.

Finally, concerning the quality of higher education, the Bologna reforms are nowhere to be seen. Instead the EC report uses the Shanghai Jiao Tong Academic Ranking of World Universities (ARWU) and the Times Higher Education Supplement’s World University Rankings (WUR) to discuss the issue of quality. The Shanghai Jiao Tong ranking uses Nobel awards and citation indexes (e.g. SCI, SSCI) – however, not only is a Nobel award a limited (some say false) proxy for quality, but the citation indexes systematically discriminate in favor of US-based institutions and journals. Only scientific output is included in each of these rankings; excluded are other kinds of university outputs which might have an impact, such as patents or policy advice.

While each ranking system is intended to be a measure of quality, it is difficult to know what we might learn when one (Times Higher) ranks an institution (for example, the London School of Economics) in 11th position while the other (Shanghai) ranks the same institution in 200th position. Such vast differences could only confuse potential students using the rankings to choose a high-quality institution. Perhaps, however, this is not the main purpose, and the rankings serve a more important one: ratcheting up both competition and discipline through comparison.

League tables are now also being developed in more nuanced ways. In 2007 the Shanghai ranking introduced one by ‘broad subject field’ (see below). What is particularly interesting here is that, relative to the USA, the EU-27 does comparatively well in Engineering/Technology and Computer Sciences (ENG), Clinical Medicine and Pharmacy (MED) and Natural Sciences and Mathematics (SCI), compared with the Social Sciences (where the USA outflanks it by a considerable degree). Are the social sciences in Europe this poor in terms of quality, and hence in serious trouble? GlobalHigherEd suggests that these differences more likely reflect the more internationalized/Anglicized publishing practices of the science, technology and medical fields, in comparison to the social sciences, which in many cases are committed to publishing in national languages.

[Table: Shanghai ranking by broad subject field, EU-27 versus USA]

The somewhat dubious nature of these rankings as indicators of quality does not stop the EC from using them to show that, of the top 100 universities, 54 are located in the USA and only 29 in Europe. And again, the overall project of the EC is to set the agenda at the European scale for Member States by putting into place, at the European level, a set of instruments – including the recently launched European Research Council – intended to help retain MST graduates as well as recruit the brightest talent from around the globe (particularly China and India) and keep them in Europe.

However, the MST capacity of the EU outruns its industry’s ability to absorb and retain the graduates. It is clear that the markets for students and brains are developing differently in different countries, but with distinct ‘types’ of markets and consumers emerging. The question is: what would an EU ranking system achieve as a technology of competitive market-making?

Susan Robertson and Peter Jones

Graphic feed: “research footprints” of US “competitors” in science and technology

[Graphic: “research footprints” of US “competitors” in science and technology]

Source: Adams J. (2007) ‘Scientific wealth and the scientific investments of nations’, in T. Galama and J. Hosek (eds.) Perspectives on U.S. Competitiveness in Science and Technology, Santa Monica, CA: Rand Corporation, p. 40. [via the Scout Report]

Note 1: PUBERD = public R&D as a share of GDP.

Note 2: See a review of the report, and especially Adams’ chapter, in IntelliBriefs.

Competitive advantage and the mobile international student

Last week GlobalHigherEd featured a series of stories on the different players battling for market share in the global higher education market. We reviewed the recently published report by the Observatory on Borderless Higher Education (OBHE).

Today’s Inside Higher Ed also features a story on this report, drawing particular attention to the competitive advantages of the different players. These include whether students require a visa for short study visits, the cost of tuition (low or moderate), living costs (low or moderate), and whether programs are available to help foreign students prepare for study before they start classes – presumably language classes.

As we pointed out last week, France and Germany ‘scrub up’ well as possible study destinations – particularly for short periods. They have low tuition fees and moderate living costs, and their visa systems would present few problems for undergraduates wanting a short period of ‘study abroad’. This might also be a useful tactic for luring students back to enrol in a graduate program, particularly if their experience is a positive one.

By comparison, the UK – a Major Player in the field like the US and Australia – scores only one tick in the competitive advantage box: its provision of programs to help students prepare for study. The downside for the prospective student is that the UK has high living costs and high tuition fees. It does, however, have a relatively high brand image and ‘esteem’ value – something Inside Higher Ed fails to point out.

As the market gets tighter, GlobalHigherEd agrees with Inside Higher Ed that there are important strategic decisions to be made by institutions and countries if they want not only to stay competitive but to increase market share. How might a nation make itself a desirable destination in this highly lucrative market? Alternatively, a country might currently be a desirable destination but see limited financial returns (aside from the not inconsequential returns through the cost of living). The issue for these low- (or no-) fee countries, such as Germany, France and Finland, is whether to respond to pressures to charge fees. Currently their international student numbers are multiplying rapidly – by more than 500% over the past five or so years. Would putting a fee structure into place for international students simply turn the tap off? This dilemma is likely to cause university administrators more than a minor headache.

Susan Robertson