OECD launches first global assessment of higher education learning outcomes

Editor’s note: the slideshow below about the Assessment of Higher Education Learning Outcomes (AHELO) initiative, and the associated press release, were kindly provided to GlobalHigherEd by Richard Yelland, Head of the Education Management and Infrastructure Division (Directorate for Education), OECD. Coverage of the AHELO launch yesterday, at the Council for Higher Education Accreditation’s 2010 Annual Conference (January 25-28, Washington, D.C.), was evident in today’s Chronicle of Higher Education (‘OECD Project Seeks International Measures for Assessing Educational Quality’), Inside Higher Ed (‘Measuring Student Learning, Globally’) and Lloyd Armstrong’s weblog Changing Higher Education.

Today’s guest entry (via slideshow) in GlobalHigherEd is designed to shed light on the nature of AHELO, an initiative that reflects the OECD’s ‘collective learning machinery’ role; a role that numerous stakeholders (e.g., state and provincial governments, non-profit foundations, ministries) engage with in myriad ways. AHELO is emerging at a historical moment when calls for a better understanding of learning outcomes, and for associated processes of quality assurance, are evident around the world. In this context it is important to understand what AHELO is, as perceived by the OECD itself, but also why select agencies and institutions (e.g., the US-based ones noted in the press release) value the OECD’s work.

~~~~~~~~~~~~~

OECD launches first global assessment of higher education learning outcomes

1/27/2010

The OECD today announced the launch of the Assessment of Higher Education Learning Outcomes (AHELO) initiative. The AHELO generic assessment component will look at skills such as problem solving and critical thinking. A US$1.2 million contract has been awarded to the Council for Aid to Education based in New York City to develop an international version of the Collegiate Learning Assessment (CLA).

Speaking at the Council for Higher Education Accreditation conference in Washington, DC, Richard Yelland, who is leading the OECD’s AHELO initiative, said: “AHELO is a pioneering international attempt to assess the quality of higher education by focussing on what students have learned during their studies and what skills they have acquired. Success will provide higher education systems and institutions with diagnostic tools for improvement that go far beyond anything currently available”.

This ground-breaking project aims to demonstrate that reliable and useful comparisons of learning outcomes can be made on a global scale and will point the way for future improvements.

Welcoming this announcement, US Under-Secretary for Education, Martha Kanter, said: “We appreciate OECD’s leadership to assess student performance on an international scale. The AHELO initiative provides the US with an exciting opportunity to collaborate with other countries to assess higher education learning outcomes in our global society.”

Council for Aid to Education (CAE) President Roger Benjamin commented: “Because of its success in important international assessments, the OECD is the right venue for creating AHELO and its generic strand, which will focus on the skills thought to be critical for human capital development and citizenship in the 21st century. We are pleased that the CLA has been chosen for this purpose.”

Funding for this work comes from participating countries and from the Lumina Foundation for Education, which has made a US$750,000 grant to the OECD.

“With Lumina’s investments focused heavily on increasing the number and quality of postsecondary degrees and credentials, the work of AHELO is essential and will help to ensure that these credentials are learning outcome-based and relevant in the United States as well as internationally,” said Jamie P. Merisotis, president and chief executive officer of Lumina Foundation.

Other components of AHELO will measure student knowledge in two discipline areas – economics and engineering. Contextual analysis will factor in student background and national differences. A value-added strand will eventually look at learning gains over time.

Higher education is an increasingly strategic investment for countries and for individuals. It is estimated that some 135 million students study worldwide in more than 17 000 universities and other institutions of post-secondary education.

At least thirteen culturally diverse countries across the globe are joining the US as participants in this groundbreaking project, including Finland, Italy, Mexico, Japan, and Kuwait. AHELO will test a sample of students in a cross-section of institutions in each country. Institutions in four states (Connecticut, Massachusetts, Missouri, and Pennsylvania) will work together, and with the State Higher Education Executive Officers (SHEEO) association, to participate on behalf of the United States.

SHEEO President Paul Lingenfelter said: “This is a real opportunity for institutions in the four states to engage in improving knowledge and practice with respect to college learning outcomes. U.S. participation is essential, and we will all benefit from their efforts.”

For information, journalists are invited to contact: Susan Fridy, (202) 822-3869, at the OECD Washington Center, or Angela Howard at the OECD in Paris, +33 1 45 24 80 99. For more information on AHELO, go to: www.oecd.org/edu/ahelo.

‘Tuning USA’: reforming higher education in the US, Europe style

Many of us are likely to be familiar with the film An American in Paris (1951), at least by name. Somehow the romantic encounters of an ex-GI turned struggling American painter with an heiress in one of Europe’s most famous cities, Paris, seem like the way things should be.

So when the US-based Lumina Foundation announced it was launching Europe’s ‘Tuning Approach within the Bologna Process’ as an educational experiment in three American states (Utah, Indiana and Minnesota) to “…assure rigor and relevance for college degrees at various levels” (see Inside Higher Ed, April 8th, 2009), familiar refrains and trains of thought were suddenly thrown into reverse gear. A European in America? Tuning USA, Europe style?

For Bologna watchers, Tuning is no new initiative. According to its website profile, Tuning started in 2000 as a project:

…to link the political objectives of the Bologna Process and at a later stage the Lisbon Strategy to the higher education sector. Over time Tuning has developed into a Process: an approach to (re-)design, develop, implement, evaluate and enhance quality in first, second and third cycle degree programmes.

Given that the Bologna Process entails the convergence of 46 higher education systems across Europe and beyond (some signatories to the Process operate outside Europe’s borders), the question of how the comparability of curricula can be assured, in terms of structures, programmes and actual teaching, was clearly a pressing issue.

Funded under the European Commission’s Erasmus Thematic Network scheme, Tuning Educational Structures in Europe emerged as a project that might address this challenge.

However, rather like the Bologna Process, Tuning has had a remarkable career. Its roll-out across Europe, and take-up in countries as far afield as Latin America and the Caribbean (LAC), have been nothing short of astonishing.

Currently 18 Latin American and Caribbean countries (181 LAC universities) are involved in Tuning Latin America across twelve subject groups, including Architecture, Business, Civil Engineering, Education, Geology, History, Law, Mathematics, Medicine, Nursing and Physics. The Bologna and Tuning Processes, it would seem, are considered key tools for generating change across Latin America.

Similar processes are under way in Central Asia, the Mediterranean region and Africa. And while the Bologna promoters tend to emphasise the cultural and cooperation orientation of Tuning and Bologna, both are self-evidently strategies to reposition European higher education geostrategically. They are market-making strategies as well as, increasingly, models for how to restructure higher education systems to produce greater resource efficiencies and, some might add, greater equity.

Similarly, the Tuning Process is regarded as a means of realizing one of the ‘big goals’ that Lumina Foundation President Jamie Merisotis had set for the Foundation soon after taking the helm: to increase the proportion of the US population with degrees to 60% by 2025 so as to ensure the global competitiveness of the US.

According to the Chronicle of Higher Education (May 1st, 2009), Merisotis “gained the ear of the White House”  during the transition days of the Obama administration in 2008 when he urged Obama “to make human capital a cornerstone of US economic policy”.

Merisotis was also one of the experts consulted by the US Department of Education when it sought to determine the goals for education, and the measures of progress toward those goals.

By February 2009, President Obama had announced to Congress he wanted America to attain the world’s highest proportion of graduates by 2020.  So while the ‘big goal’ had now been set, the question was how?

One of the Lumina Foundation’s responses was to initiate Tuning USA. According to the Chronicle, Lumina has been willing to draw on ideas generated by the education policy community in the US, and internationally.

Clifford Adelman is one of those. A senior associate at the Institute for Higher Education Policy in Washington, Adelman was contracted by the Lumina Foundation to produce a very extensive report on Europe’s higher education restructuring. The report (The Bologna Process for U.S. Eyes: Re-learning Higher Education in the Age of Convergence) was released early this April, and was profiled by Anne Corbett in GlobalHigherEd. In the report Adelman sets out to redress what he regards as the omissions from the Spellings Commission review of higher education. As Adelman (2009: viii) notes:

The core features of the Bologna Process have sufficient momentum to become the dominant global higher education model within the next two decades. Former Secretary of Education, Margaret Spellings’ Commission on the Future of Higher Education paid no attention whatsoever to Bologna, and neither did the U.S. higher education community in its underwhelming response to that Commission’s report. Such purblind stances are unforgivable in a world without borders.

But since the first version of this monograph, a shorter essay entitled The Bologna Club: What U.S. Higher Education Can Learn from a Decade of European Reconstruction (Institute for Higher Education Policy, May 2008), U.S. higher education has started listening seriously to the core messages of the remarkable and difficult undertaking in which our European colleagues have engaged. Dozens of conferences have included panels, presentations, and intense discussions of Bologna approaches to accountability, access, quality assurance, credits and transfer, and, most notably, learning outcomes in the context of the disciplines. In that latter regard, in fact, three state higher education systems—Indiana, Minnesota, and Utah—have established study groups to examine the Bologna “Tuning” process to determine the forms and extent of its potential in U.S. contexts. Scarcely a year ago, such an effort would have been unthinkable.

Working with students, faculty members and education officials from Indiana, Minnesota and Utah, Lumina has now initiated Tuning USA as a year-long project:

The aim is to create a shared understanding among higher education’s stakeholders of the subject-specific knowledge and transferable skills that students in six fields must demonstrate upon completion of a degree program. Each state has elected to draft learning outcomes and map the relations between these outcomes and graduates’ employment options for at least two of the following disciplines: biology, chemistry, education, history, physics and graphic design (see report in InsideIndianabusiness).

The world has changed. The borders between the US and European higher education are now somewhat leaky, for strategic purposes, to be sure.

A European in America is now somehow thinkable!

Susan Robertson

Ranking – in a different (CHE) way?

GlobalHigherEd has been running a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform. Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales at Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated from weighted indicators, which are then turned into figures, added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all; this is actually not true. Just because the Toyota Prius uses a very different technology to generate power does not exclude it from the species of automobiles. What, then, are the differences?

Firstly, we do not believe in the ranking of entire HEIs. This is mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. Thus, one can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact its physics department is outstanding, its sociology appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer. A doctor would diagnose that the man is in a serious condition, while a statistician might claim that overall he is doing fine.

So instead we always rank at the subject level. And the results of the first ExcellenceRanking, which focused on natural sciences and mathematics in European universities with a clear target group of prospective Master’s and PhD students, prove the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was within a set of quite closely related fields.

Secondly, we do not create values by weighting indicators and then calculating an overall value. Why is that? The main reason is that any weight is necessarily arbitrary, or in other words political. The person setting the weights decides how much each indicator counts. By doing so, you pre-decide the outcome of any ranking. You make it even worse when you then add the different values together and create one overall value, because this blurs the differences between individual indicators.

Say a department publishes a lot but nobody reads its work. If you give publications a weight of 2 and citations a weight of 1, it will look as if the department is very strong. If you do it the other way around, it will look pretty weak. If you add the values, you make it even worse, because you blur the difference between the two performances. And those two indicators are even rather closely related; if you combine results from research indicators with reputation indicators, the outcome becomes entirely meaningless.
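
To make the arithmetic concrete, here is a minimal sketch in Python with made-up indicator scores (purely illustrative, not CHE or bibliometric data), showing how the choice of weights alone determines which department comes out on top:

```python
# Purely illustrative: hypothetical normalised indicator scores, not real CHE data.
departments = {
    "Dept A": {"publications": 90, "citations": 30},  # publishes a lot, rarely cited
    "Dept B": {"publications": 50, "citations": 80},  # publishes less, widely cited
}

def overall(scores, w_pub, w_cit):
    """Collapse two indicators into a single value using arbitrary weights."""
    return w_pub * scores["publications"] + w_cit * scores["citations"]

for w_pub, w_cit in [(2, 1), (1, 2)]:
    ranking = sorted(departments,
                     key=lambda d: overall(departments[d], w_pub, w_cit),
                     reverse=True)
    print(f"weights pub={w_pub}, cit={w_cit}: {ranking}")

# weights pub=2, cit=1: ['Dept A', 'Dept B']  -> the prolific department looks stronger
# weights pub=1, cit=2: ['Dept B', 'Dept A']  -> the widely cited one looks stronger
```

The data never change; only the weighting decision does, which is precisely the sense in which weights are political rather than technical.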

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classic ranking we allow users to create “my ranking”, so they can choose the indicators they want to look at and in which order.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the accumulation), the league table positions create the even worse illusion of distinct and decisive differences between places. They conjure up the impression of a difference in quality (no time or space here to tackle the tricky issue of what quality might be) that is measurable to the percentage point; in other words, that there is a qualitative, objectively recognizable and measurable difference between place number 12 and place number 15. Which is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES QS: even in the SocSci subject cluster you find a mere difference of 4.3 points on a 100-point scale between league ranks 33 and 43. In the overall university ranking, there is a meager 6.7-point difference between ranks 21 and 41, going down to a slim 15.3-point difference between ranks 100 and 200. That is to say, the league table positions of HEIs might differ on the underlying score by much less than a single point, or less than 1% (of an arbitrarily set figure). Thus, the score tells us much less than the league position suggests.

Our approach, therefore, is to create groups (top, middle, bottom) which reflect the performance of each HEI relative to the other HEIs.
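
For illustration only, a rough sketch of the grouping idea might look as follows; simple terciles are used here as a stand-in, since CHE’s actual grouping criteria are indicator-specific and statistically more careful than this:

```python
# Purely illustrative: sort institutions into top / middle / bottom groups per
# indicator instead of computing a single weighted league position.
# Terciles are used here only to convey the idea; they are not CHE's actual rules.
from statistics import quantiles

scores = {"HEI 1": 72, "HEI 2": 68, "HEI 3": 40, "HEI 4": 55, "HEI 5": 90, "HEI 6": 35}

lower, upper = quantiles(scores.values(), n=3)  # two tercile cut points

def group(value):
    if value >= upper:
        return "top"
    return "middle" if value >= lower else "bottom"

for hei, value in sorted(scores.items()):
    print(f"{hei}: {group(value)}")
```

The point is that each HEI is described relative to its peers on each indicator separately, rather than being squeezed into a single overall rank.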

This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators which are chosen by the ranking institution. Consequently, the results of the respective ranking can tell you something about how an HEI performs in the framework of what the ranker thinks interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but maybe (depending on the methodology) who is performing best (or in our case better than average) in aspects considered relevant by the ranker.

A small but highly relevant aspect might be added here. Rankings (in the HE system as well as in other areas of life) might suggest that a result in an indicator proves that an institution is performing well in the area measured by that indicator. Well, it does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the performance of the institution and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, the “best” or “better” is again a political decision if the indicator stands alone (e.g. are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in creating transparency if they are properly used, i.e. if the users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among several others for making whatever decision is at hand in relation to an HEI (study, cooperation, funding, etc.).

Finally, modesty is maybe what a ranker should have in abundance. Having run the excellence ranking in three different phases (the initial round in 2007, a second phase with new subjects right now, and a repetition of the natural sciences just starting), I am certain of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something, of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this area (and often referring to our method) include, for example, Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

University institutional performance: HEFCE, UK universities and the media

This entry has been kindly prepared by Rosemary Deem, Professor of Sociology of Education, University of Bristol, UK. Rosemary’s expertise and research interests are in the areas of higher education, managerialism, governance, globalization, and organizational cultures (student and staff).

Prior to her appointment at Bristol, Rosemary was Dean of Social Sciences at the University of Lancaster. Rosemary served as a member of the ESRC Grants Board (1999-2003), and as a panel member for Education in the Research Assessment Exercise (1996, 2001, 2008).

GlobalHigherEd invited Rosemary to respond to one of the themes (understanding institutional performance) in the UK’s Higher Education Debate aired by the Department for Innovation, Universities and Skills (DIUS) over 2008.

~~~~~~~~~~~~~~

Institutional performance of universities, and of their academic staff and students, is a very topical issue in many countries, for potential students and their families and sponsors, for governments and for businesses. As well as numerous national rankings, two annual international league tables in particular are the focus of much government and institutional interest: the Shanghai Jiao Tong ranking, developed for the Chinese government to benchmark its own universities, and the commercial Times Higher listing of top international universities. Universities vie with each other to appear at the top of these rankings of so-called world-class universities, even though the quest for world-class status has negative as well as positive consequences for national higher education systems (see here).

International league tables often build on metrics that are themselves international (e.g. publication citation indexes) or use proxies for quality such as the proportion of international students or staff/student ratios, whereas national league tables tend to develop their own criteria, as the UK Research Assessment Exercise (RAE) has done and as its planned replacement, the Research Excellence Framework, is intended to do.

In March 2008, John Denham, Secretary of State for (the Department of) Innovation, Universities and Skills (or DIUS), commissioned the Higher Education Funding Council for England (HEFCE) to give some advice on measuring institutional performance. Other themes on which the Minister commissioned advice, and which will be reviewed on GlobalHigherEd over the next few months, were: On-Line Higher Education Learning; Intellectual Property and research benefits; the Demographic challenge facing higher education; Research Careers; Teaching and the Student Experience; Part-time studies and Higher Education; Academia and public policy making; and International issues in Higher Education.

Denham identified five policy areas for the report on ‘measuring institutional performance’ that is the concern of this entry, namely: research; enabling business to innovate and engagement in knowledge transfer activity; high-quality teaching; improving workforce skills; and widening participation.

This list could be seen as a predictable one since it relates to current UK government policies on universities and strongly emphasizes the role of higher education in producing employable graduates and relating its research and teaching to business and the ‘knowledge economy’.

Additionally, HEFCE already has quality and success measures, and also surveys such as the National Student Survey of all final-year undergraduates, for everything except workforce development. The five areas are a powerful indicator of what government thinks the purposes of universities are, which is part of a much wider debate (see here and here).

On the other hand, the list is interesting for what it leaves out: higher education institutions and their local communities (which is not just about servicing business), universities’ provision for supporting the learning of their own staff (since they are major employers in their localities), and the relationship between teaching and research.

The report makes clear that HEFCE wants to “add value whilst minimising the unintended consequences” (p. 2), would like to introduce a code of practice for the use of performance measures, and does not want to introduce more official league tables in the five policy areas. There is also a discussion of why performance is measured: it may be for funding purposes, to evaluate new policies, to inform universities so they can make decisions about their strategic direction, to improve performance, or to inform the operation of markets. The report also considers the disadvantages of performance measures, the tendency for some measures to be proxies (which will be a significant issue if plans to use metrics and bibliometrics as proxies for research quality in the new Research Excellence Framework are adopted), and the tendency to measure activity and volume but not impact.

However, what is not emphasized enough is that the consequences, once a performance measure is made public, are not within anyone’s control. Both the internet and the media ensure that this is a significant challenge. It is no good saying that “Newspaper league tables do not provide an accurate picture of the higher education sector” (p. 7) and then taking action which invalidates this point.

Thus in the 2008 RAE, detailed cross-institutional results were made available by HEFCE to the media last week before they were available to the universities themselves, just so that newspaper league tables could be constructed.

Now isn’t this an example of the tail wagging the dog, with HEFCE helping it to do so? Furthermore, market and policy incentives may conflict with each other. If an institution’s student market is led by middle-class students with excellent exam grades, then urging it to engage in widening participation can fall on deaf ears. Also, whilst UK universities are still in receipt of significant public funding, many also generate substantial private funding, and some institutional heads are increasingly irritated by tight government controls over what they do and how they do it.

Two other significant issues are considered in the report. One is value-added measures, which HEFCE feels it is not yet ready to pronounce on. Constructing these for schools has been controversial, and the question of the period over which value-added measures should be collected is problematic, since HEFCE measures would look only at what is added for recent graduates, not at what happens to them over the life course as a whole.

The other issue is about whether understanding and measuring different dimensions of institutional performance could help to support diversity in the sector.  It is not clear how this would work for the following three reasons:

  1. Institutions will tend to do what they think is valued and has money attached, so if the quality of research is more highly valued and better funded than quality of teaching, then every institution will want to do research.
  2. University missions and ‘brands’ are driven by a whole multitude of factors, importantly by articulating the values and visions of staff and students, and possibly very little by ‘performance’ measures; they often appeal to an international as well as a national audience, and perfect markets with detailed, reliable consumer knowledge do not exist in higher education.
  3. As the HEFCE report points out, there is a complex relationship between research, knowledge transfer, teaching, CPD and workforce development in terms of economic impact (and surely social and cultural impact too?). Given that this is the case, it is not evident that encouraging HEIs to focus on only one or two policy areas would be helpful.

There is a suggestion in the report that web-based spidergrams, based on a seemingly agreed set of performance indicators, might be developed, which would allow users to drill down into more detail if they wished. Whilst this might well be useful, it will not replace or address the media’s current dominance in compiling league tables based on a whole variety of official and unofficial performance measures and proxies. Nor will it really address the ways in which the “high value of the UK higher education ‘brand’ nationally and internationally” is sustained.

Internationally, the web and word of mouth are more critical than what now look like rather old-fashioned performance measures and indicators.  In addition, the economic downturn and the state of the UK’s economy and sterling are likely to be far more influential in this than anything HEFCE does about institutional performance.

The report, whilst making some important points, is essentially introspective; it fails to grasp sufficiently how some of its own measures and activities are distorted by the media, does not really engage with the kinds of new technologies students and potential students are now using (mobile devices, blogs, wikis, social networking sites, etc.), and focuses far more on national understandings of institutional performance than on how to improve the global impact and understanding of UK higher education.

Rosemary Deem

Changing higher education and the claimed educational paradigm shift – sobering up educational optimism with some sociological scepticism

While there is a consensus that higher education governance and organization are being transformed, the same cannot be said with regard to the impact of that transformation on the ‘educational’ dimension of higher education.

Under the traveling influence of the diverse versions of New Public Management (NPM), European public sectors are being molded by market-like and client-driven perspectives. Continental higher education is no exception. Austria and Portugal, to mention only these two countries, have recently re-organized their higher education systems explicitly from this perspective. The basic assumptions are that the more autonomous institutions are, the more responsive they are to changes in their organizational environment, and that academic collegial governance must be replaced by managerial expertise.

Simultaneously, the EU is enforcing discourses and developing policies based on the competitive advantages of a ‘Europe of knowledge’. ‘Knowledge societies’ are presented as depending on the production of new knowledge, its transmission through education and training, its dissemination through ICT, and its use in new industrial processes and services.

By means of ‘soft instruments’ such as the European Qualifications Framework (EQF) and the Tuning I and II projects (see here and here), the EU is inducing an educational turn or, as some argue, an emergent educational paradigm. The educational concepts of ‘learning’, ‘knowledge’, ‘skills’, ‘competences’, ‘learning outcomes’ and ‘qualifications’ re-emerge in the framework of the EHEA, this time as core educational perspectives.

From the analysis of the documents of the European Commission and its diverse agencies and bodies, one can see that a central educational role is now attributed to the concept of ‘learning outcomes’ and to the ‘competences’ students are supposed to possess at the end of the learning process.

In this respect, the EQF is central to advancing the envisaged educational change. It claims to provide common reference levels for describing learning, from basic skills up to the PhD level. The 2007 European Parliament recommendation defines “competence” as “the proven ability to use knowledge, skills and personal, social and/or methodological abilities, in work or study situations and in professional and personal development”.

The shift from ‘knowledge content’ as the organizer of learning to ‘competences’, with a focus on the capacity to use knowledge(s) in order to know and to act technically, socially and morally, moves the role of knowledge from one where it underpins a formative process based on ‘traditional’ approaches to subjects and mastery of content, to one where the primary interest is in what the learner achieves as an outcome of the learning process. In this new model, knowledge content is mediated by competences and translated into learning outcomes, linking together ‘understanding’, ‘skills’ and ‘abilities’.

However, the issue of knowledge content is passed over and left aside, as if the educational goal of competence building could be attained without discussion of the need to develop procedural competencies based more on content than on ‘learning styles’. Indeed it can be argued that the knowledge content drawn on in the process of competence building is somehow neutralized in its educational role.

In higher education, “where learning outcomes are considered as essential elements of ongoing reforms” (CEC: 8), there are not many data sources available on the educational impact of the implementation of competence-based perspectives. And while it is too early to draw conclusions about the real impact on students’ experiences of the so-called ‘paradigm shift’ brought about by the implementation of the competence-based educational approach, the analysis of the educational concepts is, nonetheless, an interesting starting point.

The founding educational idea of Western higher education was based on the transforming potential of knowledge at both the individual and the social level. Educational categories (teaching, learning, students, professors, classes, etc.) were grounded in the formative role attributed to knowledge, and so were the curriculum and the teaching and learning processes. Reconfiguring the educational role of knowledge, from this formative role to that of mobilizing the potential to act socially (in particular in the world of work), induces important changes in these educational categories.

As higher education institutions are held to be sensitive and responsive to social and economic change, the need to design ‘learning outcomes’ on the basis of internal and external stakeholders’ perceptions (as we see with Tuning: 1) grows in proportion. The ‘student’ appears simultaneously as an internal stakeholder, a client of educational services, a person moving from education to the labor market, and a ‘learner’ of competences. The professor, rather than vanishing, is being reinvented as a provider of learning opportunities. Illuminated by the new educational paradigm and pushed by the diktat of efficiency in a context of mass higher education, he/she is no longer the ‘center’ of knowledge flux and delivery but the provider of learning opportunities for ‘learners’. Moreover, as an academic, he/she is giving up his/her ultimate responsibility to exercise quality judgments on teaching-learning processes in favor of managerial expertise.

As ‘learning outcomes’ are what a learner is expected to know, understand and/or be able to demonstrate on completion of learning, and given that these can be represented by indicators, assessment of the educational process can move from inside higher education institutions to outside assessment by evaluation technicians. With regard to the lecture theater as the educational locus par excellence, ICT instruments and ideographs de-localize classes to the ether of the ‘www’, with ‘face-to-face’ teaching-learning becoming a minor proportion of the ‘learner’s’ activities. E-learning is not the ‘death’ of the professor but his/her metamorphosis into a ‘learning monitor’. Additionally, the rise of virtual campuses introduces a new kind of academic life whose educational consequences are still to be identified.

The learner-centered model that is emerging has the educational potential foreseen by many educationalists (e.g. John Dewey, Paulo Freire and Ivan Illich, among others) to deal with the needs of post-industrial societies and with new forms of citizenship. The emerging educational paradigm promises a lot: the empowerment of the student, the enhancement of his/her capacity and responsibility to express his/her difference, the enhancement of teamwork, mutual help, learning by doing, etc.

One might underline the emancipatory potential that this perspective assumes – and some educationalists are quite optimistic about it. However, education does not occur in a social vacuum, as some sociologists rightly point out. In a context where HEIs are increasingly assuming the features of ‘complete organizations’ and where knowledge is held up as the major competitive factor in the worldwide economy, educational optimism should/must be sobered up with some sociological scepticism.

In fact the risk is that knowledge, by evolving away from a central ‘formative’ input into a series of competencies, may simply pass – like money – through individuals without transforming them (see the work of Basil Bernstein for an elaboration of this idea). By blurring the frontiers between academic and work competencies, and between education and training, higher education runs the risk of sacrificing too much to the gods of relevance, to (short-term) labor market needs. Contemporary labor markets require competencies that are supposed to be easily recognized by employers and to have the potential to be continuously re-formed. The educational risk is that of reducing the formation of the ‘critical self’ of the student to the ‘corporate self’ of the learner.

António M. Magalhães

The ‘European Quality Assurance Register’ for higher education: from networks to hierarchy?

Quality assurance has been an important global dialogue, with quality assurance agencies embedded in the fabric of the global higher education landscape. These agencies are mostly nationally located, and are typically organized into networks and associations, for example the Nordic Quality Assurance Network in Higher Education or the US-based Council for Higher Education Accreditation.

Since the early 1990s, we have seen the development of regional and global networks of agencies, for instance the European Association for Quality Assurance in Higher Education, and the International Network for Quality Assurance Agencies in Higher Education, which in 2007 boasted full membership from 136 organizations in 74 countries. Such networks both drive and produce processes of globalization and regionalization.

The emergence of ‘registers’ – of the kind announced today with the launch of the European Quality Assurance Register (EQAR) by the E4 Group (ESU, the European University Association – EUA, the European Association of Institutions in Higher Education – EURASHE, and the European Association for Quality Assurance in Higher Education – ENQA) – signals a rather different kind of ‘globalising’ development in the sector. In short, we might see it as a move from a network of agencies to a register that acts to regulate the sector. It also signals a further development in the creation of a European higher education industry.

So, what will the EQAR do? According to EQAR, its role is to

…provide clear and reliable information on the quality assurance agencies (QAAs) operating in Europe: this is a list of agencies that substantially comply with the European Standards and Guidelines for Quality Assurance (ESG) as adopted by the European ministers of higher education in Bergen 2005.

The Register is expected to:

  • promote student mobility by providing a basis for the increase of trust among higher education institutions;
  • reduce opportunities for “accreditation mills” to gain credibility;
  • provide a basis for governments to authorize higher education institutions to choose any agency from the Register, if that is compatible with national arrangements;
  • provide a means for higher education institutions to choose between different agencies, if that is compatible with national arrangements; and
  • serve as an instrument to improve the quality of education.

All Quality Assurance Agencies that comply with the European Standards and Guidelines for Quality Assurance will feature on the register, with compliance secured through an external review process.

There will also be a Register Committee – an independent body comprising 11 quality assurance experts, nominated by European stakeholder organisations. This committee will decide on the inclusion of quality assurance agencies. The EQAR association, which operates the Register, will be managed by an Executive Board, composed of E4 representatives, and a Secretariat.

The ‘register’ not only formalises and institutionalises a new layer of quality assurance, it also generates a regulatory hierarchy over and above other public and private regulatory agencies. It is also intended to underpin the development of a European higher education industry, with the stamp of regulatory approval providing important information in the global marketplace.

Susan Robertson

Benchmarking ‘the international student experience’

GlobalHigherEd has carried quite a few entries on benchmarking practices in the higher education sector over the past few months – the ‘world class’ university, the OECD innovation scoreboards, the World Bank’s Knowledge Assessment Methodology, the Programme for International Student Assessment, and so on.

University World News this week reported on an interesting new development in international benchmarking practices – at least for the UK – suggesting, too, that the benchmarking machinery/industry is itself big business and likely to grow.

According to the University World News, the International Graduate Insight Group (or i-graduate) last week unveiled a study in the UK to:

…compare the expectations and actual experiences of both British and foreign students at all levels of higher education across the country. The Welsh Student Barometer will gather the opinions of up to 60,000 students across 10 Welsh universities and colleges. i-graduate will benchmark the results of the survey so that each university can see how its ability to match student expectations with other groupings of institutions, not only in Wales but also the rest of the world.

i-graduate markets itself as:

an independent benchmarking and research service, delivering comparative insights for the education sector worldwide: your finger on the pulse of student and stakeholder opinion.

We deliver an advanced range of dedicated market research and consultancy services for the education sector. The i-graduate network brings international insight, risk assessment and reassurance across strategy and planning, recruitment, delivery and relationship management.

i-graduate has clearly been busy amassing information on ‘the international student experience’. It has collected responses from more than 100,000 students from over 90 countries through its International Student Barometer (ISB), which it describes as the first truly global benchmark of the student experience. This information is packaged up (for a price) in multiple ways for different audiences, including leading UK universities. According to i-graduate, the ISB is:

a risk management tool, enabling you to track expectations against the experiences of international students. The ISB isolates the key drivers of international student satisfaction and establishes the relative importance of each – as seen through the eyes of your students. The insight will tell you how expectations and experience affect their loyalty, their likelihood to endorse and the extent to which they would actively encourage or deter others.

Indexes like this, whether providing information about one’s location in the hierarchy or strategic information on brand loyalty, act as a kind of disciplining and directing practice.

The firms producing these indexes and barometers, like i-graduate, are also in reality packaging particular kinds of ‘knowledge’ about the sector and selling it back to the sector. In a recent session of the ESRC-funded seminar series on Changing Cultures of Competitiveness, Dr. Ngai-Ling Sum described these firms as brokering a ‘knowledge brand’ – a trade-marked bundle of strategies/tools and insights, sold for a price, intended to alter an individual’s, institution’s or nation’s practices and in turn lead to greater competitiveness – a phenomenon she tags to the practices involved in producing the Knowledge-Based Economy (KBE).

It will be interesting to look more closely at, and report in a future blog on, what the barometer is measuring. For it is the specific socio-economic and political content of these indexes and barometers, as well as the disciplining and directing practices involved, which are important for understanding the direction of global higher education.

Susan Robertson

Battling for market share 4: China as an ‘Emerging Contender’ for internationally mobile students

This week GlobalHigherEd has been running a series of in-depth reports on the battle for market share of higher education. Our reports draw from a major study released last week by the Observatory on Borderless Higher Education (OBHE), International Student Mobility: Patterns and Trends. The Observatory report identifies four categories: (1) Major Players; (2) Middle Powers; (3) Evolving Destinations; and (4) Emerging Contenders.

Today we look at the fourth category – ‘Emerging Contenders’.

China, Singapore and Malaysia are viewed as emerging contenders, with 7%, 2% and 2% of global market share respectively. Between them, these three have 12% (around 250,000-300,000 students) of global market share (compared with 45% for the Major Players, 20% for the Middle Powers, and 13% for the Evolving Destinations). The OBHE report places these three countries in this category because:

  • they have all taken active measures to recruit overseas students
  • they have all increased their competitiveness over the past couple of years
  • all three have allocated resources to become ‘world class’ institutions over the next decade
  • changing mobility patterns suggest that they are having some success in getting some market share
  • all three are using more English as the language of instruction, making them more attractive to students
  • all have relatively low fees, and hence are a potentially attractive alternative to the US, UK and Australia

Shanghai in 2002 (courtesy of Henry Wai-chung Yeung)

China has 7% of global market share (pretty impressive, given its new-arrival status – and compared to the US, which has 22%). In 1999, fewer than 45,000 foreign students were enrolled in Chinese universities. In 2005, the figure was more than 140,000. The largest shares come from South Korea (25%) and Japan (20%), while Indonesia, Thailand and Vietnam are also source countries. More significantly, there are also students coming from the US (in year-abroad programs) and Russia. It would appear that international students see some part of their education in China as a strategic investment. This could be enhanced if China engages with the Bologna Process – at present for its own internal restructuring purposes – but doing so might also make it a more attractive destination for European students, and vice versa.

Singapore has 2% of market share. In 2005, 66,000 foreign students were enrolled. While historically Singapore’s higher education institutions enrolled students from around the region, its recent Singapore Global Schoolhouse strategy has meant that it has sought to recruit students globally. The OBHE report noted:

It is Singapore’s demography that makes it an especially attractive destination for international students, especially those coming from Asia and the South Pacific. With a population consisting of ethnic Indians, Malays and Chinese, Singapore has the capacity to provide regional students with a ‘Western’ education in a familiar socio-cultural environment.

However, recent developments in Singapore, most notably the withdrawal of the University of New South Wales Asia campus from Singapore, suggest that there is considerable volatility in the market, which might make future investors and students somewhat cautious.

Malaysia, like Singapore, has 2% of global market share. Traditionally its international students have come from around the region, with Chinese students accounting for around 35%. However, over the past decade, Malaysia has tried not only to invest in its own institutions and dissuade its own nationals from leaving, but has also managed to make some inroads into the Middle East – a fast-growing region with ambitions of its own. It has also set up a campus in Gaborone, Botswana, in order to create a bridge into the African region.

While highly ambitious, the OBHE reports that:

…bureaucratic difficulties are arguably impeding Malaysia’s competitive progress in the global education market….[this will require] substantial funding toward the development of quality assurance schemes…

All three players – China, Singapore and Malaysia – should be regarded as serious emerging contenders in the global higher education market, not only because of their potential attractiveness (for instance fees, living costs, language of instruction) but also because, historically, they have been major sending countries. If each of these contenders is able to generate a sufficiently attractive environment to keep its own nationals at home, as well as to entice foreign fee-payers, this will surely create a major headache for countries (such as the UK with Malaysian students) that have depended on these students for their own growth.

Susan Robertson

New reactions to OECD drive to establish a “worldwide higher education assessment system”

Another busy day for the OECD’s Directorate for Education, especially its director, Barbara Ischinger, and Andreas Schleicher, the head of its Indicators and Analysis Division. Both Inside Higher Ed and the Chronicle of Higher Education have lengthy stories today about the OECD’s role in seeking to establish a “worldwide higher education assessment system”, despite the diversity of resources and systems that exist across global space, and widely varying views on the efficacy of assessment systems at the tertiary level. News was stirred up following some recent OECD meetings on this issue.

These initial reactions are obviously mediated by the presence of higher education media outlets in the USA and Europe, with views underlain by the ongoing politics of two other territory-spanning governance initiatives with assessment elements – the Bologna Process (in Europe) and the Spellings Commission (in the USA).

The stories are well developed and are best read directly. The Chronicle article, in particular, identifies critical views on this initiative from the perspective of organizations like the American Council on Education, the European University Association, the Council for Higher Education Accreditation, and the National Association of State Universities and Land-Grant Colleges, while Inside Higher Ed also highlights critical commentary from the Institute for Higher Education Policy.