A New Academic Year Begins

Well, I survived the first week of the 2012-2013 academic year as the new department chairperson, as well as the near doubling of student numbers in my new (as of 2011) online course (Geography 340 — World Regions in Global Context). The experience of creating an online course from scratch has been a fascinating one; a form of ‘learning while doing’ that I highly recommend. It also reinforced the point made in many reports and articles that it takes, on average, roughly three times as much effort and money to create a new online course as a conventional one, and that such courses require a lot of ongoing effort to run if you want students to be engaged and actually learn. I’m also curious, after the last 1.5 years of working on this, how many of the ‘disruptive innovation’ propagandists have ever taught an online course, or run an online education program. Not many, I would wager.

The prior week included a great half-day event for department chairs, center directors, deans, etc., at UW-Madison, dubbed the ‘Leadership Summit.’ The event kicked off when our Interim Chancellor, David Ward (a former president of the American Council on Education), gave a fantastic context-setting talk about the past, present and future of public higher education. David Ward has a unique ability to be analytical while also identifying challenges and opportunities that are tangible and can be worked with. There aren’t many people who can be critical about how we are still trapped in many “Victorian-” and “Edwardian-era” structures and assumptions (e.g., the timetable), while at the same time flagging how we are making progress on a number of fronts and have some real opportunities to keep the ‘public’ in public higher education. It’s hard for me to summarize all of the points of his talk, but suffice it to say Ward was arguing that educators need to be more aware of the structural changes underway in higher education (including outside the U.S.), while recognizing that we have no choice but to rely on ourselves to fashion realistic solutions to the stress-points that exist. In short, the pendulum has fallen off the pin that enabled cyclical change for decades past, and we’re shifting into a very different era, one with no explicit social compact about the ideal balance of public and private support for the U.S. higher education system. Ward is too experienced and smart to latch onto the ‘disruptive innovation’ hype circulating through U.S. universities right now, and instead advocates bottom-up forms of education innovation that we, as faculty and staff, delineate and push forward.

Besides these start-of-term reflections, I also wanted to inform readers of GlobalHigherEd that my May 2012 entry titled ‘International Consortia of Universities and the Mission/Activities Question’ has just been updated. Representatives of several consortia that were not flagged in the original entry contacted me, so I’ve added them to the long list of consortia in the latter half of the entry. Debates continue in internationalization circles about the efficacy of international consortia of universities, and this entry is designed to provide some food for thought to further these debates.

Finally, I wanted to let you know that one of the OECD’s flagship reports – Education at a Glance 2012: OECD Indicators – will be released this coming Tuesday, and I’ve got an embargoed copy that is chock-a-block full of insights. I am working up a new entry about the 2012 report that is modeled, to a degree, on my September 2011 entry titled ‘International student mobility highlights in the OECD’s Education at a Glance 2011.’ If I can hide behind my office door for some of Monday, I’ll post the new entry on Tuesday (or Wednesday at the latest).

Best wishes at the start of the new academic year (at least here in North America)!

Kris Olds

Searching for the Holy Grail of learning outcomes

Editor’s note: I’m back after a two month hiatus dealing with family health care challenges on top of some new administrative duties and a new online course. Thank you very much for bearing with the absence of postings on GlobalHigherEd.

Today’s entry was kindly contributed by John Douglass, Gregg Thomson, and Chun-Mei Zhao of the Center for Studies in Higher Education, UC Berkeley. This fascinating contribution should be viewed in the context of some of our earlier postings on learning outcomes, as well as Scott Jaschik’s recent article “‘Tuning’ History” in Inside Higher Ed (13 February 2012).

Today’s entry is a timely one given debates about the enhanced importance of assessing learning outcomes at a range of scales (from the intra-departmental right up to the global scale). In addition, please note that this entry is adapted from the article: Douglass, J.A., Thomson, G. and Zhao, C., ‘The Learning Outcomes Race: the Value of Self-Reported Gains in Large Research Universities’, Higher Education, February 2012.

Responses, including guest entries, are most welcome!

Kris Olds

~~~~~~~~~~~~~~~~~~~~~

It’s a clarion call. Ministries of education along with critics of higher education institutions want real proof of student “learning outcomes” that can help justify large national investments in their colleges and universities. How else to construct accountability regimes with real teeth? But where to find the one-size-fits-all test?

In the US, there is a vehicle that claims it can do this – the Collegiate Learning Assessment (CLA) test. In its present form, the CLA is given to a relatively small sample group of students within an institution to supposedly “assess their abilities to think critically, reason analytically, solve problems and communicate clearly and cogently.” The aggregated and statistically derived results are then used as a means to judge the institution’s overall added value. In the words of the CLA’s creators, the resulting data can then “assist faculty, department chairs, school administrators and others interested in programmatic change to improve teaching and learning, particularly with respect to strengthening higher order skills.” But can it really do this?

The merit of the CLA as a true assessment of learning outcomes is, we dare say, debatable. In part, the arrival and success of the CLA is a story of markets. In essence, it is a successfully marketed product that is fulfilling a growing demand with few recognized competitors. As a result, the CLA is winning the “learning outcomes race,” essentially becoming the “gold standard” in the US.

But we worry that the CLA’s early success is potentially thwarting the development of other valuable and more nuanced alternatives – whether other types of standardized tests that aim to measure students’ learning curves, or other approaches such as student portfolios, contextually designed surveys of the student experience, and alumni feedback.

The search for the Holy Grail to measure learning gains started in the US, but the Organisation for Economic Co-operation and Development (OECD) wants to take it global. Here we tell a bit of this story, raise serious questions regarding the validity of the CLA and of this global quest, and suggest there are alternatives.

The OECD Enters the Market

In 2008, the OECD began a process to assess whether it might develop a test for use internationally. A project emerged: the Assessment of Higher Education Learning Outcomes (AHELO) program would assess the feasibility of capturing learning outcomes valid across cultures and languages, an effort informed in part by the OECD’s success in developing the Programme for International Student Assessment (PISA) – a widely accepted survey of the knowledge and skills essential for students near the end of the compulsory education years.

The proclaimed objective of the AHELO on-going feasibility study is to determine whether an international assessment is “scientifically and practically possible.” To make this determination, the organizers developed a number of so-called study “strands.” One of the most important is the “Generic Strand,” which depends on the administration of a version of the CLA to gauge “generic skills” and competences of students at the beginning and close to the end of a bachelor’s degree program. This includes the desire to measure a student’s progression in “critical thinking, the ability to generate fresh ideas, and the practical application of theory,” along with “ease in written communication, leadership ability, and the ability to work in a group, etc.” OECD leaders claim the resulting data will be a tool for the following purposes:

  • Universities will be able to assess and improve their teaching.
  • Students will be able to make better choices in selecting institutions – assuming that the results are somehow made available publicly.
  • Policy-makers will be assured that the considerable amounts spent on higher education are spent well.
  • Employers will know better if the skills of the graduates entering the job market match their needs.

Between 10,000 and 30,000 students in more than 16 countries are taking part in the administration of the OECD’s version of the CLA. Full administration at approximately 10 universities in each country is scheduled for 2011 through December 2012.

AHELO’s project leaders admit the complexity of developing learning outcome measures – for example, how does one account for cultural differences and the circumstances of students and their institutions? “The factors affecting higher education are woven so tightly together that they must first be teased apart before an accurate assessment can be made,” notes one AHELO publication.

By March 2010, and at a cost of €150,000 each, the ministries of education in Finland, Korea, Kuwait, Mexico, Norway and the United States had agreed to commit a number of their universities to participate in the Generic Strand (i.e. the OECD version of the CLA) of the feasibility study. The State Higher Education Executive Officers – an American association of the directors of higher education coordinating and governing boards – is helping to coordinate the effort in the US. Four states – Connecticut, Massachusetts, Pennsylvania, and Missouri – have agreed to participate. A number of campuses of the Pennsylvania State University agreed to participate in the OECD’s version of the CLA with the goal of a spring 2012 administration.

However, the validity and value of the CLA are very much in question, and the debate over how to measure learning outcomes remains contentious. Many institutions, including most major US research universities, view with skepticism the methodology used by the CLA and its practical applications in large institutions that are home to a great variety of disciplinary traditions.

The Validity of the Collegiate Learning Assessment (CLA)?

A product of the Council for Aid to Education (CAE), the CLA is a written test that focuses on critical thinking, analytic reasoning, written communication, and problem solving, administered to small random samples of students who write essays and memoranda in response to test material they have not previously seen. The CAE is technically a non-profit, but it has a financial stake in promoting the CLA, which has emerged as its primary product, much as the Educational Testing Service hawks the SAT.

In the US, the standard administration of the CLA involves a cross-sectional sample of approximately 100 first-year students and another 100 fourth-year seniors. It is necessary to keep the sample size small because scoring the narrative responses is labor-intensive. With such a small sample size, there is no guarantee that a longitudinal approach, in which the same students are tested over time, would yield enough responses.

CLA proponents justify the cross-sectional approach on the grounds that students in US colleges and universities often transfer or do not graduate within a four-year period. The cross-sectional design also has the convenience that results can be generated relatively quickly, without having to wait for a cohort to progress to its senior year.

Test results derived from these samples are used to represent an institution-wide measure of a university or college’s contribution (or value-added) to the development of its students’ generic cognitive competencies.  Based on these results, institutions can then be compared with one another on the basis of their relative value-added performance.

Proponents of the CLA test claim its value based on three principles:

  • First, for accountability purposes, valid assessment of learning outcomes for students at an institution is only possible by rigorously controlling for the characteristics of those students at matriculation.
  • Second, by using SAT scores as the control for initial student characteristics, it is possible to calculate the value-added performance of the institution: a statistically derived score indicating how the institution fares against what is expected of it in terms of student learning. This is done by comparing two value-added scores: the actual score, which is the observed difference between freshman and senior CLA test performance, and the predicted score, which is the statistically estimated freshman-to-senior difference based on student characteristics at entry (a toy sketch of this logic follows the list below).
  • Third, this relative performance, i.e., the discrepancy between the actual and predicted value-added scores, can in turn be compared to the relative performance achieved at other institutions. Hence the CLA test has accomplished a critical feat in the learning outcomes pursuit: it produces a statistically derived score that is simple and “objective” and that can be used to compare and even rank institutions on how well a college is performing in terms of student learning.
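
To make that comparison concrete, here is a minimal toy sketch in Python of the value-added logic described above. It is emphatically not the CAE’s actual statistical model: the institution-level numbers are invented purely for illustration, and the real CLA scoring involves essay-based marking and more elaborate controls.

```python
# A toy sketch of the 'value-added' logic described above -- NOT the CAE's
# actual model. Assumes institution-level means and a simple linear regression
# of the freshman-to-senior CLA gain on entering SAT scores.
import numpy as np

# Hypothetical institution-level data, invented purely for illustration
mean_sat = np.array([1050, 1120, 1200, 1280, 1350])  # mean entering SAT, by institution
actual_gain = np.array([95, 110, 100, 130, 115])     # mean senior CLA minus mean freshman CLA

# Predicted gain given entry characteristics (here, SAT alone)
slope, intercept = np.polyfit(mean_sat, actual_gain, 1)
predicted_gain = intercept + slope * mean_sat

# 'Relative performance' is the gap between actual and predicted gain;
# institutions are then compared (or ranked) on this residual.
relative_performance = actual_gain - predicted_gain
for sat, act, rel in zip(mean_sat, actual_gain, relative_performance):
    print(f"Entering SAT {sat}: actual gain {act}, relative performance {rel:+.1f}")
```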

Prominent higher education researchers have challenged the validity of the CLA test on a number of grounds. For one, the CLA and the SAT are very highly correlated: the amount of variance in student learning outcomes that remains after controlling for SAT scores is incredibly small. Most institutions’ value-added will simply fall in the expected range and be indistinguishable from one another. Hence, why bother with the CLA?

The CLA results are also sample-dependent. Specifically, there is a large array of uncontrollable variables related to student motivation to participate in, and do well on, the test. Students who take the CLA are volunteers, and their results have no bearing on their academic careers. How do you motivate students to sit through the entire time allotted for essay writing and to take the chore seriously? Some institutions provide extra credit for taking the test, or rewards for its completion. At the same time, self-selection bias may be considerable. There are also concerns that institutions may try to game the test by selecting high-achieving senior-year students. High-stakes testing is always subject to gaming, and there is no way to prevent institutions from cherry-picking – purposefully selecting students who will help drive up learning gain scores.

Other criticisms center on the assumption that the CLA has fashioned a test of agreed-upon generic cognitive skills that is equally relevant to all students. But recent findings suggest that CLA results are, to some extent, discipline-specific. And, as noted, because of the cost and difficulty of evaluating individual student essays, the CLA relies upon a rather small sample to make sweeping generalizations about overall institutional effectiveness; it provides very little, if any, useful information at the level of the major.

To veterans in the higher education research community, the “history lessons” of earlier attempts to rank institutions on the basis of “value-added” measures are particularly telling. There is evidence that all previous attempts at large-scale or campus-wide value-added assessment in higher education have collapsed, in part due to the observed instability of the measures. In many cases, comparing (or ranking) institutions using CLA results merely offers the “appearance of objectivity” that many stakeholders in higher education crave.

The CLA’s proponents respond by attempting to demonstrate statistically that much of this criticism does not apply: for example, that regardless of the amount of variance accounted for, the tightly SAT-controlled design does allow for the extraction of valid results despite the vagaries of specific samples or student motivation. But even if the proponents are right, and their small-sample testing program with appropriate statistical controls could produce a reliable and valid “value-added” institutional score, the CLA might generate meaningful data in a small liberal arts college while being of very limited practical utility in large and complex universities.

Why? First, the CLA does not pinpoint where exactly a problem lies, or which department or faculty members would be responsible for addressing it. The CAE claims that, in addition to providing an institution-wide “value-added” score, the CLA serves as a diagnostic tool designed “to assist faculty in improving teaching and learning, in particular as a means toward strengthening higher order skills.”

But for a large, complex research university like the University of California, Berkeley, this is a wishful proposition. Exactly how would the statistically derived result (on the basis of a standard administration of a few hundred freshman and senior test-takers) that, for example, the Berkeley campus was performing more poorly than expected (or relatively more poorly than, say, the Santa Barbara campus in the UC system) assist the Berkeley faculty in improving its teaching and learning?

Second, the CLA does not provide enough information on how well a university is doing in promoting learning among students from various backgrounds and life circumstances. This assessment approach is incompatible with the core values of diversity and access championed by the majority of large, public research universities.

Embarking on a “Holy Grail–like” quest for a valid “value-added” measure is, of course, a fundamental value choice. Ironically, the more the CLA enterprise insists that the only thing that really matters for valid accountability in higher education is a statistical test of “value-added” by which universities can be scored and ranked, the more the CLA lacks a broader, “systemic validity,” as identified by Henry Braun in 2008:

Assessment practices and systems of accountability are systemically valid if they generate useful information and constructive responses that support one or more policy goals (Access, Quality, Equity, Efficiency) within an education system without causing undue deterioration with respect to other goals.

“Valid” or not, the one-size-fits-all, narrow standardized-test “value-added” program of assessment in higher education promises little in the way of “useful information and constructive responses.” A ranking system based on such a test could only have decidedly pernicious effects, as Cliff Adelman once observed. In Lee Shulman’s terms, the CLA is a “high stakes/low yield” strategy, where the high stakes corrupt the very processes they are intended to support.

For the purposes of institution-wide assessment, especially at large, complex universities, we surmise that the net value of the CLA’s value-added scheme would be at best unconstructive, and at worst a source of inaccurate information used for actual decision-making and rankings.

One Alternative?

In a new study published in the journal Higher Education, we examine the relative merits of student experience surveys in gauging learning outcomes by analyzing data from the Student Experience in the Research University (SERU) Consortium and Survey, based at the Center for Studies in Higher Education at UC Berkeley. There are real problems with student self-assessments, but there is an opportunity to learn more than what is offered by standardized tests.

Administered since 2002 as a census of all students at the nine undergraduate campuses of the University of California, the SERU survey generates a rich data set on student academic engagement, experience in the major, participation in research, civic and co-curricular activities, time use, and overall satisfaction with the university experience. The survey also provides self-reported gains on multiple learning outcome dimensions by asking students to retrospectively rate their proficiencies when they entered the university and at the time of the survey. SERU results are then integrated with institutional data.

In 2011, the SERU Survey was administered at all nine University of California undergraduate campuses, and to students at an additional nine major research universities in the US, all members of the Association of American Universities (AAU): the Universities of Michigan, Minnesota, Florida, Texas, Pittsburgh, Oregon, and North Carolina, plus Rutgers University and the University of Southern California. (A SERU-International Consortium has recently been formed with six “founding” universities located in China, Brazil, the Netherlands, and South Africa.)

SERU is the only nationally administered survey of first-degree students in the US that is specifically designed to study policy issues facing large research universities. It is also one of four nationally recognized surveys for institutional accountability for research universities participating in the Voluntary System of Accountability initiative in the US. The other surveys are the College Student Experiences Questionnaire, the College Senior Survey, and the National Survey of Student Engagement.

The technique of self-reported categorical gains (e.g., “a little”, “a lot”) typically employed in student surveys has been shown to have dubious validity compared to “direct measures” of student learning. The SERU survey is different. It uses a retrospective posttest design for measuring self-reported learning outcomes that yields more valid data. In our exploration of that data, we show connections between self-reports and student GPA and provide evidence of strong face validity of learning outcomes based on these self-reports.

The overall SERU survey design has many other advantages, especially in large, complex institutional settings. It includes the collection of extensive information on academic engagement as well as a range of demographic and institutional data. The SERU dataset sheds light on both the variety of student backgrounds and the great variety of academic disciplines with their own set of expectations and learning goals.

Without excluding other forms of gauging learning outcomes, we conclude that, designed properly, student surveys offer a valuable and more nuanced alternative for understanding and identifying learning outcomes in the university environment.

But we also note the tension between the accountability desires of governments and the needs of individual universities, which should focus on institutional self-improvement. One might hope that the two would be synonymous. But how to make ministries and other policymakers more fully understand the perils of a silver-bullet test?

The Lure of the Big Test

Back to the politics of the CLA. This test is a blunt tool, creating questionable data that serve immediate political ends. It seems to ignore how students actually learn and the variety of experiences among different sub-populations. Universities are more like large cosmopolitan cities full of a multitude of learning communities than small villages with observable norms. In one test run of the CLA, a major research university in the US received data showing that its students had actually experienced a decline in their academic knowledge – a negative return? That seems highly unlikely.

But how to counteract the strong desire of government ministries, and international bodies like the OECD, to create broad standardized tests and measures of outcomes? Even with the flaws noted, the political momentum to generate a one-size-fits-all model is powerful. The OECD’s gambit has already captured the interest and money of a broad range of national ministries of education and the US Department of Education.

What are the chances the “pilot phase” will actually lead to a conclusion to drop the pursuit of a higher education version of PISA? Creating an international “gold standard” for measuring learning outcomes appears too enticing, too influential, and too lucrative for that to happen – although we obviously cannot predict the future.

It may very well be that the data and research offered in our study of student survey responses will be viewed as largely irrelevant in the push and pull for market position and political influence. Governments love to rank, and this might be one more tool to help encourage institutional differentiation – a goal of many nation-states.

But for universities that want data they can act on, we argue that student surveys, if properly designed, offer one of the most useful and cost-effective tools. They also offer a means to combat simplistic rankings generated by the CLA and similar tests.

John Douglass, Gregg Thomson, and Chun-Mei Zhao

International student mobility highlights in the OECD’s Education at a Glance 2011

Education at a Glance 2011 was released today by the OECD. The report is replete with data about education systems, patterns, trends, etc., and is well worth reading.

Free copies of the full report (497 pp) and the highlights version (98 pp) are available in PDF format via the links I provided in this sentence. An online summary is available here too, with links to country notes for Brazil (in English; in Portuguese), Chile, Estonia, France (in French), Germany (in English; in German), Greece, Italy (in English; in Italian), Japan (in English; in Japanese), Korea, Mexico (in English; in Spanish), Spain (in English; in Spanish), and the United Kingdom.

While all of the sections are worth reading, I always find the data regarding international student mobility too hard to resist glancing at when the report first comes out. These six graphics, and associated highlights (all but the first extracted from the highlights version of Education at a Glance 2011) will give you a flavour of some of the noteworthy student mobility trends.  Further details regarding mobility trends and patterns can be found in the full report (pp. 318-339).

How many students study abroad?

  • In 2009, almost 3.7 million tertiary students were enrolled outside their country of citizenship, representing an increase of more than 6% on the previous year.
  • Just over 77% of students worldwide who study abroad do so in OECD countries.
  • In absolute terms, the largest numbers of international students are from China, India and Korea. Asians account for 52% of all students studying abroad worldwide.

Where do students go to study abroad?

  • Six countries – Australia, Canada, France, Germany, the United Kingdom and the United States – hosted more than half of the world’s students who studied abroad in 2009.
  • The United States saw a significant drop as a preferred destination of foreign students between 2000 and 2009, falling from about 23% of the global market share to 18%.
  • The shares of foreign students who chose Australia and New Zealand as their destination each grew by almost 2%, as did the share choosing the Russian Federation, which has become an important new player in the international education market.

How many international students stay on in the host country?

  • Several OECD countries have eased their immigration policies to encourage the temporary or permanent immigration of international students, including Australia, Canada, Finland, France, New Zealand and Norway.
  • Many students move under a free-movement regime, such as the European Union, and do not need a residence permit to remain in their country of study.
  • On average, 25% of international students who did not renew their student permits changed their student status in the host country mainly for work-related reasons.

Other complementary reports released over the last month include:

The reworking of the global higher education landscape continues to generate a wide array of ripple effects at a range of scales (from the local through to the global). While not perfect, the OECD’s annual Education at a Glance 2011 does an excellent job of providing much of the available data on these trends, and on a wide array of issues and phenomena that help to shape these mobility outcomes. A comparative perspective, after all, helps to flag the place of individual countries in the broader and ever-evolving landscape; a landscape that countries play a significant role in both constructing and reacting to.

Kris Olds

Global regionalism, interregionalism, and higher education

The development of linkages between higher education systems in a variety of ‘world regions’ continues apace. Developments in Europe, Asia, Africa, the Gulf, and Latin America, albeit uneven in nature, point to the desire to frame and construct regional agendas and architectures. Regionalism – a state-led initiative to enhance integration so as to boost trade and security – is now being broadened out such that higher education, and in some cases research, is being uplifted into the regionalism impulse/dynamic.

The incorporation of higher education and research into the regionalism agenda is starting to generate various forms of interregionalisms as well. What I mean by this is that once a regional higher education area or research area has been established, at least partially, relations between that region and other regions (i.e. partners) then come to be sought after. These may take the form of relations between (a) regions (e.g., Europe and Asia), or (b) a region and components of another region (e.g., Europe and Brazil; Latin America and the United States; Southeast Asia and Australia). The dynamics of how interregional relations are formed are best examined via case studies for, suffice it to say, not all regions are equal, nor do regions (or indeed countries) speak with singular and stable voices. Moreover, some interregional relations can be practice-oriented, involving informal sharing of best practices that might not formally be ‘on the books.’

Let me outline two examples of the regionalism/interregionalism dynamic below.

ALFA PUENTES

The first example comes straight from an 8 July 2011 newsletter from the European University Association (EUA), one of the most active and effective higher education organizations forging interregional relations of various sorts.

In their newsletter article, the EUA states (and I quote at length):

The harmonisation agenda in Central America: ALFA PUENTES sub-regional project launch (July 07, 2011)

 EUA, OBREAL, HRK and university association partners from Costa Rica, Guatemala, Honduras, Panama, and Mexico gathered in Guatemala City on 27-28 June both to discuss and formally launch the sub-regional project ‘Towards a qualifications framework for MesoAmerica’, one of the three pillars of the European Commission supported structural project ‘ALFA PUENTES’ which EUA is coordinating.

Hosted by sub-regional project coordinator CSUCA (Consejo Universitario CentroAmericana), and further attended by the sub-regional coordinators of the Andean Community (ASCUN), Mercosur (Grupo Montevideo), partners discussed current higher education initiatives in Central America and how the ALFA PUENTES project can both support and build upon them.

CSUCA, created in 1948 with a mission to further integration in Central America and improve the quality of higher education in the region, has accelerated its agenda over the past 10 years and recently established a regional accreditation body. This endeavour has been facilitated by project partner and EUA member HRK (in conjunction with DAAD) as well as several other donors. The association, which represents around 20 public universities in Central America, has an ambitious agenda to create better transparency and harmonisation of degrees, and has already agreed to a common definition of credit points and a template for a diploma supplement.

Secretary General Dr Juan Alfonso Fuentes Soria stated in a public presentation of the project that ALFA PUENTES will be utilised to generate a discussion on qualifications frameworks and how this may accelerate the Central America objectives of degree convergence. European experience via the Bologna Process will be shared and European project partners as well as Latin American (LA) partners from other regions will contribute expertise and good practice.

ALFA PUENTES is a three-year project aimed at both supporting Latin American higher education convergence processes and creating deeper working relationships between European and Latin American university associations. Thematic sub-regional projects (MesoAmerica, Andean Community and Mercosur) will be connected with a series of transversal activities including a pan-Latin American survey on change forces in higher education, as well as two large Europe-LA University Association Conferences (2012 and 2014).

This lengthy quote captures a fascinating array of patterns and processes that are unfolding right now; some unique to Europe, some unique to Latin America, and some reflective of synergy and complementarities between these two world regions.

TUNING the Americas

The second example, one more visual in nature, consists of a recent map we created about the export of the TUNING phenomenon. As we have noted in two previous GlobalHigherEd entries:

TUNING is a process launched in Europe to help build the European Higher Education Area (EHEA). As noted on the key TUNING website, TUNING is designed to:

Contribute significantly to the elaboration of a framework of comparable and compatible qualifications in each of the (potential) signatory countries of the Bologna process, which should be described in terms of workload, level, learning outcomes, competences and profile.

The TUNING logic is captured nicely by this graphic from page 15 of the TUNING General Brochure.

Over time, lessons learned about integration and educational reform via these types of mechanisms/technologies of governance have come to be viewed with considerable interest in other parts of the world, including Africa, North America, and Latin America. In short, the TUNING approach, an element of the building of the EHEA, has come to receive considerable attention in non-European regions that are also seeking to guide their own higher education reform processes, as well as (in many cases) region-building processes.

As is evident in one of several ‘TUNING Americas’ maps we (Susan Robertson, Thomas Muhr, and myself) are working on with the support of the UW-Madison Cartography Lab and the WUN, the TUNING approach is being taken up in other world regions, sometimes with the direct support of the European Commission (e.g., in Latin America or Africa). The map below is based on data regarding the institutional take-up of TUNING as of late 2010.


Please note that this particular map only focuses on Europe and the Americas, and it leaves out other countries and world regions. However, the image pasted in below, which was extracted from a publicly available presentation by Robert Wagenaar of the University of Groningen, captures aspects of TUNING’s evolving global geography.

Despite the importance of EU largesse and support, it would be inaccurate to suggest that the EU is foisting TUNING on world regions; this is the post-colonial era, after all, and regions are voluntarily working with this European-originated reform mechanism and Europe-based actors. TUNING also only works when faculty/staff members in higher education institutions outside of Europe drive and then implement the process (a point Robert Wagenaar emphasizes). Or look, for example, at the role of the US-based Lumina Foundation in its TUNING USA initiative. Rather than imposition, what we seem to have is capacity building, mutual interest in the ‘competencies’ and ‘learning outcomes’ agenda, and aspects of the best practices phenomenon (all of which help explain the ongoing building of synergy between the OECD’s AHELO initiative and the European/EU-enabled TUNING initiative). This said, there are some ongoing debates about the possible alignment implications associated with the TUNING initiative.

These are but two examples of many emerging regionalisms/interregionalisms in the global higher education landscape; a complicated multiscalar phenomenon of educational reform and ‘modernization,’ and region building, mixed in with some fascinating cases of relational identity formation at the regional scale.

Kris Olds (with thanks to Susan Robertson & Thomas Muhr)

The OECD & Higher Education in a World Changed Utterly

The Organisation for Economic Co-operation and Development (OECD) is an important part of the ‘learning machinery’ that both sheds light on and guides higher education reform. While this international organization does not have jurisdictional authority over higher education regulations and practices within nation-states, it does have a unique capacity to conduct research, generate debates, benchmark, provide advice, convene, and respond to the expressed needs of its member states.

A case in point is the annual Education at a Glance report that the OECD issues every September. Education at a Glance is just that – a report – yet one that many governments feel a need both to support (via data provision) and to respond to (and quickly!) when its findings highlight potentially significant weaknesses in their higher education systems.

Needless to say, the context in which higher education reform is being undertaken will shape the agenda of organizations like the OECD. It is perhaps not surprising, then, that the impact of the economic crisis upon higher education systems and practices is high up on the list of priorities for the OECD’s Directorate for Education.

For example, the OECD’s Programme on Institutional Management in Higher Education (IMHE) is sponsoring a conference this week titled Higher Education in a World Changed Utterly: Doing More with Less (Paris, 13-15 September 2010). While over 400 people will be attending the event, the majority of us can engage with the conference via:

  • The conference programme and 42 pp discussion paper.
  • A live conference webcast (13-15 September) which will provide on-demand feeds after the event ends.
  • An active line-up of conference blogs via the OECD’s educationtoday site.
  • A new (as of 15 September) social media project called Raise Your Hand, which seeks the views of all education stakeholders on a single question: “What is the most important action we need to take in education today?”

One of the interesting dimensions of the OECD that I struggle to convey to my (American) undergraduate students is that the OECD is ‘our’ organization, not (as a surprising number of them think) an autonomous free agent running amok on the roads towards global government or corporate hegemony. Rather, it is an international organization that member states pay for and direct, primarily via membership on the OECD Council.

As the image to the left implies, the Directorate for Education is part of the OECD Secretariat:

The Secretariat in Paris is made up of some 2 500 staff who support the activities of committees, and carry out the work in response to priorities decided by the OECD Council. The staff includes economists, lawyers, scientists and other professionals.

The Directorate for Education works hard to ensure that initiatives it is involved in are framed such that they generate practical and important outcomes for actual institutions (including governments and universities).

Learning via OECD initiatives and products (events, reports, missions/consultancies, etc.) is increasingly focused on issues that are hugely important to universities given the changing nature of the knowledge-based economy, and the rising importance of innovation, at a variety of scales, to socio-economic development. Moreover, the OECD is very open about its role in shaping policy reform agendas to more effectively manage the globalization process. As the OECD notes:

The [analytical focus] matrix is moving from consideration of each policy area within each member country to analysis of how various policy areas interact with each other, across countries and even beyond the OECD area. How social policy affects the way economies operate, for example. Or how globalisation will change the world’s economies by opening new perspectives for growth, or perhaps trigger resistance manifested in protectionism.

As it opens to many new contacts around the world, the OECD will broaden its scope, looking ahead to a post-industrial age in which it aims to tightly weave OECD economies into a yet more prosperous and increasingly knowledge-based world economy.

This explains, for example, the OECD’s role in policy work on cross-border higher education, the Assessment of Higher Education Learning Outcomes (AHELO) programme, the Bologna process, and so on.

In closing, events like IMHE’s Higher Education in a World Changed Utterly: Doing More with Less are windows into the OECD’s architecture, its modus operandi, as well as tangible events that bring together key thinkers about the globalisation of higher education: for these reasons the conference is worth exploring, if only on a virtual level.

More importantly, we need to engage with these types of international organizations, their component parts (e.g., IMHE, an organization that universities can join), and their events, for they are our inventions. We really don’t have many inter-governmental (yet open to other stakeholders) ‘learning machines’, and the challenges are such that we can certainly benefit from more informed debate and strategic planning!

Kris Olds

The OECD’s AHELO: a PISA for higher education?

Editor’s note: greetings from Paris, one of the ‘calculative centres’ associated with the globalization of higher education.  One of the key institutions associated with this development process is the Paris-based Organisation for Economic Co-operation and Development/Organisation de coopération et de développement économiques (OECD/OCDE) given its work on higher education, as well as on related issues such as innovation, science and technology, and so on.

See below for a recent presentation about the OECD’s Assessment of Higher Education Learning Outcomes (AHELO) initiative. This presentation is courtesy of Diane Lalancette, an Analyst with the AHELO initiative, OECD – Directorate for Education.

In ‘tweeting’ about this presentation a few weeks ago, I noticed that a few people sent it on while calling AHELO “a PISA for higher education”. PISA, for those of you who don’t know, is the OECD’s Programme for International Student Assessment, hence the acronym. As the OECD puts it:

PISA assesses how far students near the end of compulsory education have acquired some of the knowledge and skills that are essential for full participation in society. In all cycles, the domains of reading, mathematical and scientific literacy are covered not merely in terms of mastery of the school curriculum, but in terms of important knowledge and skills needed in adult life.

Yet as Diane Lalancette put it (in a note to me):

While AHELO takes a similar approach to PISA in that it will assess student knowledge and skills directly, it is a feasibility study and will not provide information at national or system level like PISA does.

In short, the focus of the AHELO learning outcomes measures will be at the level of institutions and will not allow for comparisons at national levels, one of the key elements that can put national governments on edge (depending on how well their compulsory education systems do in a relative sense).

Our thanks to Diane Lalancette and Richard Yelland of the OECD’s Directorate for Education for permission to post the presentation below.

Kris Olds

~~~~~~~~~~~~

‘Generation crunch’ (or, what is happening to graduate jobs and the ‘graduate premium’ in the UK)

Earlier this week, the Centre for Enterprise (CFE) in the UK released their report Generation Crunch: the demand for recent graduates from SMEs.

The report is essentially concerned with the employment prospects for university graduates in Small to Medium Enterprises (SMEs) and makes for particularly interesting reading.

Focusing on SMEs as sources of employment is important because, as they note:

While there is a relatively clear picture of this demand from the public sector and larger businesses, much less is known about the demand from SMEs. This matters, as there are an estimated 4.8 million SMEs in the UK, employing 23.1 million people and together they account for 99% of all enterprises.

Several findings stand out in their report. The first is that the CFE’s survey of over 500 SMEs in the East Midlands region of the UK highlighted confusion over the graduate ‘brand’, with 29% incorrectly identifying A-Levels (an upper secondary school-leaving qualification in the UK) as a graduate qualification.

Even when furnished with the correct definition of a graduate level qualification, it is clear that the recruitment of Generation Crunch graduates is a minority pursuit — just 11% of SMEs had taken on a recent graduate in the past 12 months and only 12% indicated they would do so in the next 12 months.

Almost a third (32%) of those firms that were not hiring graduates reported that nothing would make them recruit a graduate in the next year and the reason for most was a lack of demand, rather than an inadequate or unsuitable supply of graduates.

In an interview this week with the Guardian, James Kewin, joint managing director of the Centre for Enterprise, is reported as saying:

There is not a clear or shared understanding of the term graduate among small and medium size businesses. There is a clear need to rationalise the plethora of qualification frameworks, levels and agencies that currently litter the education and skills landscape and to develop an easily understandable summary of what is and what isn’t a graduate-level qualification.

He said efforts to boost the proportion of graduates in jobs could have only a marginal impact. “Most small and medium size businesses that do not recruit reported that lack of demand, rather than inadequate and unsuitable supply, was their primary reason for not recruiting,” he said. “This suggests that the trend for increasing the employability skills of graduates will, in isolation, have only a marginal impact. The same is true of initiatives aimed at promoting, subsidising or improving access to graduate recruits. While they may lead to a short-term reduction in graduate unemployment, they do not address the fundamental barrier – lack of business need – that prevents most small and medium size businesses from recruiting.”

This is also particularly damaging news for the UK government at the current time, given that it is busy trying to encourage more students to enrol in university studies.

In the Foreword to the new ‘Framework for Higher Education’, Higher Ambitions, released in late 2009, the Minister for Business, Innovation and Skills – Peter Mandelson – promised that:

A university education can be an entry ticket to the best paid employment and a preparation for a globalised world of work (p. 24).

What makes the CFE’s research potential dynamite is the implications it has for the government’s review, currently being led by Lord Browne, former head of BP, on lifting the cap on tuition fees in English, Northern Irish, and Welsh universities with English students in them.

Image courtesy of Bianca Soucek

Lifting the cap on tuition fees is sold to students as being compensated for by a ‘graduate premium’. In other words, students who invest in university undergraduate studies (and, with increased student fees, they are investing more of their own funds in those studies) will continue to earn ‘considerably more’ over a lifetime than those who don’t.

Last year GlobalHigherEd reported on the OECD’s statistical evidence about declining graduate premiums, despite the OECD’s own strong claims about the positive economic returns from investing in university studies. We pointed out that the evidence is clear: the value of the premium holds only as long as its value as a positional good is secured. The greater the number of students entering university, the more the value of the premium is reduced.

This, of course, is what lies behind Lord Browne’s observations in early December 2009, as reported by BBC News. Lord Browne calculated the graduate premium as being one-quarter (£100,000) of the figure claimed by government (£400,000); the latter, inflated figure was also the one used by government when it justified its increase in the cap on university tuition fees (from £1,225 to £3,225), implemented in 2007. Had the value of a university premium declined, the press asked? No, said the government! The question, of course, is who are we talking about? Clearly everyone is not in the same boat, and some might not be in a boat at all.

In 2007, a study on the economics of a degree by PricewaterhouseCoopers for Universities UK produced a different figure for the ‘graduate premium’ – an average of £160,000. This study pointed out, however, that the ‘average’ concealed important differences between students, with medicine and dentistry students earning a ‘graduate premium’ of around £340,000, humanities students around £51,500, and arts students £35,000. Now it is not difficult to do the maths on this one (a rough sketch follows below). Investing in an arts degree does not make for good economic sense. Indeed, PricewaterhouseCoopers reports that males with an arts undergraduate degree will earn 4% less than males who hold only an A-level qualification.
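
As a rough illustration of that maths, the sketch below (in Python) nets the PwC premium figures quoted above against three years of tuition at the £3,225 cap. It is deliberately crude: the three-year degree length is an assumption, and the calculation ignores foregone earnings, maintenance costs, discounting, and any post-Browne fee increase.

```python
# Back-of-the-envelope only: nets the PwC lifetime 'graduate premium' figures
# quoted above against tuition fees alone. Ignores foregone earnings,
# maintenance costs, discounting, and any future fee increase.
annual_fee = 3_225   # tuition fee cap in place since 2007 (GBP per year)
years = 3            # assumed length of a typical undergraduate degree
fees = annual_fee * years

premiums = {
    "medicine & dentistry": 340_000,  # PwC lifetime premium estimates (GBP)
    "humanities": 51_500,
    "arts": 35_000,
}

print(f"Tuition over {years} years: GBP {fees:,}")
for subject, premium in premiums.items():
    print(f"{subject}: lifetime premium GBP {premium:,}, net of fees GBP {premium - fees:,}")
```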

When faced with…

  • limited job prospects, if the CFE’s data on SMEs and graduate employment is anything to go by
  • likely cuts in UK public sector spending as the government manages its worst financial crisis since the 1930s
  • knowledge that subject of study, gender, social class, income, non-traditional entry qualifications, and so on, can mediate the value of a ‘graduate premium’ (positively and negatively) and therefore should be placed into the mix of any hard-edged economic consideration
  • a poor return on investing in a university degree if studies are in areas like arts and humanities
  • a likely increase in the cost of university tuition after the election to inject funding into a limping university sector

…some students and their families could be forgiven for coming to the conclusion that a university education at all costs is simply not worth it as an economic investment in their future. This conclusion is likely to apply to families in other OECD countries, and not just the UK.

Governments might be better served if they came clean on the economic argument and instead emphasized the value of university studies for social, cultural and political reasons (indeed, the OECD’s recent Education at a Glance 2009, p. 176, cites figures showing that having a tertiary education is associated with a roughly 20 percentage point increase in the probability of expressing ‘political interest’).

By recovering, valuing, and making prominent these dimensions and outcomes of intellectual inquiry, we could then put such knowledge and capability to work on important global problems, like poverty, climate change, sustainability, and building more equitable and socially cohesive communities.

Susan Robertson

International students in the UK: interesting facts

Promoting and responding to the globalisation of the higher education sector are a myriad of newer actors/agencies on the scene, including the UK Higher Education International Unit. Set up in 2007, the UK HE International Unit aims to provide:

credible, timely and relevant analysis to those managers engaged in internationalisation across the UK HE sector, namely – Heads of institutions, pro-Vice Chancellors for research and international activities; Heads of research/business development offices and International student recruitment & welfare officers.

The UK HE International Unit publishes and profiles (with download options) useful analytical reports, and provides synoptic comparative pictures of international student recruitment and staff recruitment for UK higher education institutions and their competitors. Their newsletter is well worth subscribing to.

Readers of GlobalHigherEd might find the following UK HE International Unit compiled facts interesting:

  • In 2004, 2.7 million students were enrolled in HEIs outside their countries of citizenship. In 2005-06, six countries hosted 67% of these students (23% in the US, 12% in the UK, 11% in Germany, 10% in France, 7% in Australia, and 5% in Japan). (UNESCO, 2006)
  • New Zealand’s share of the global market for international students increased more than fourfold between 2000 and 2006. Australia’s increased by 58% and the UK’s by 35%. (OECD, 2006)
  • There were 223,850 international students (excluding EU) enrolled at UK HEIs in 2005-06, an increase of 64% in just five years. There were a further 106,000 EU students in 2005-06. (HESA, 2006)
  • International students make up 13% of all HE students in the UK, a proportion exceeded only by New Zealand and Australia. For those undertaking advanced research programmes, the figure is 40%, second only to Switzerland. The OECD averages are 6% and 16%, respectively. (OECD, 2006)
  • UK HEIs continue to attract new full-time undergraduates from abroad. The number of new international applicants for entry in 2007 was 68,500, an increase of 7.8% on the previous year. The number of EU applicants rose by 33%. (UCAS, 2007)
  • Students from China make up almost one-quarter of all international students in the UK. The fastest increase is from India: in 2007 there were more than 23,000 Indian students in the UK, a five-fold increase in less than a decade. (British Council, 2007)
  • The number of students in England participating in the Erasmus programme declined by 40% between 1995-96 and 2004-05 – from 9,500 to 5,500. Participation from other EU countries increased during this period. However, North American and Australian students have a lower mobility level than their UK counterparts. (CIHE, 2007).

Susan Robertson

Has higher education become a victim of its own propaganda?

Editor’s note: today’s guest entry was kindly written by Ellen Hazelkorn, Dean of the Faculty of Applied Arts and Director of the Higher Education Policy Research Unit (HEPRU), Dublin Institute of Technology, Ireland. She also works with the OECD’s Programme on Institutional Management in Higher Education (IMHE). Her entry should be read in conjunction with some of our recent entries on the linkages and tensions between the Bologna Process and the Lisbon Strategy, the role of foundations and endowments in facilitating innovative research yet also heightening resource inequities, as well as the ever-present benchmarking and ranking debates.

~~~~~~~~~

The recent Council of the European Union’s statement on the role of higher education is another in a long list of statements from the EU, national governments, the OECD, UNESCO, etc., proclaiming the importance of higher education (HE) to/for economic development. While HE has long yearned for the time when it would head the policy agenda, and be rewarded with vast sums of public investment, it may not have realised that increased funding would be accompanied by calls for greater accountability and scrutiny, pressure for value-for-money, and organisational and governance reform. Many critics cite these developments as changing the fundamentals of higher education. Has higher education become the victim of its own propaganda?

At a recent conference in Brussels a representative from the EU reflected on this paradox. The Lisbon Strategy identified a future in which Europe would be a/the leader of the global knowledge economy. But when the statistics were reviewed, there was a wide gap between vision and reality. The Shanghai Academic Ranking of World Universities, which has become the gold standard of worldwide HE rankings, has identified too few European universities among the top 100. This was, he said, a serious problem and blow to the European strategy. Change is required, urgently.

University rankings are, whether we like it or not, beginning to influence the behaviour of higher education institutions and higher education policy because they arguably provide a snapshot of competition within the global knowledge industrial sector (see E. Hazelkorn, Higher Education Management and Policy, 19:2, and forthcoming in Higher Education Policy, 2008). Denmark and France have introduced new legislation to encourage mergers or the formation of ‘pôles’ to enhance critical mass and visibility, while Germany and the UK are using national research rankings or teaching/learning evaluations as a ‘market’ mechanism to effect change. Others, like Germany, Denmark and Ireland, are enforcing changes in institutional governance, replacing elected rectors with corporate CEO-type leadership. Performance funding is a feature everywhere. Even the European Research Council’s method of ‘empowering’ (funding) the researcher rather than the institution is likely to fuel institutional competition.

In response, universities and other HEIs are having to look more strategically at the way they conduct their business and organise their affairs, and at the quality of their various ‘products’, e.g., educational programming and research. In return for increased autonomy, governments want more accountability; in return for more funding, governments want more income-generation; in return for greater support for research, governments want to identify ‘winners’; and in return for valuing HE’s contribution to society, governments want measurable outputs (see, for example, this call for an “ombudsman” for higher education in Ireland).

European governments are moving from an egalitarian approach – where all institutions are broadly equal in status and quality – to one in which excellence is promoted through elite institutions, differentiation is encouraged through competitive funding, public accountability is driven by performance measurements or institutional contracts, and student fees are a reflection of consumer buoyancy.

But neither the financial costs nor the implications of this strategy – for both governments and institutions – have been thought through. The German government has invested €1.9b over five years in the Excellence Initiative, but this sum pales into insignificance compared with claims that a single ‘world class’ university is a $1b – $1.5b annual operation, plus a further $500m if it has a medical school, or with other national investment strategies, e.g., China’s $20b ‘211 Project’ or Korea’s $1.2b ‘Brain Korea 21’ programme, or with the fund-raising capabilities of US universities (‘Updates on Billion-Dollar Campaigns at 31 Universities’; ‘Foundations, endowments and higher education: Europe ruminates while the USA stratifies‘).

Given public and policy disdain for increased taxation, if European governments wish to compete in this environment, which policy objectives will be sacrificed? Is the rush to establish ‘world-class’ European universities hiding a growing gap between private and public, research and teaching, elite and mass education? Evidence from Ireland suggests that despite efforts to retain a ‘binary’ system, students are fleeing from less endowed, less prestigious institutes of technology in favour of ‘universities’. At one stage, the UK government promoted the idea of concentrating research activity in a few select institutions/centres until critics, notably the Lambert report and more recently the OECD, argued that regionality does matter.

Europeans are keen to establish a ‘world class’ HE system which can compete with the best US universities. But it is clear that such efforts are being undertaken without a full understanding of the implications, intended and unintended.

Ellen Hazelkorn

OECD ministers meet in January to discuss possible evaluation of “outcomes” of higher education

Further to our last entry on this issue, and a 15 November 2007 story in The Economist, here is an official OECD summary of the Informal OECD Ministerial Meeting on evaluating the outcomes of Higher Education, Tokyo, 11-12 January 2008.  The meetings relate to the perception, in the OECD and its member governments, of an “increasingly significant role of higher education as a driver of economic growth and the pressing need for better ways to value and develop higher education and to respond to the needs of the knowledge society”.

Governing by numbers: the PISA effect

Editor’s note: this guest entry has been kindly prepared by Dr Sotiria Grek, Research Fellow at the Centre for Educational Sociology, University of Edinburgh. Dr Grek currently works on the ESRC-funded project Governing by Numbers, which seeks to understand and explain the origins, processes and impact of the increased emphasis on measuring quality in education against standardised indicators of performance in Scotland and England. Governing by Numbers forms the UK (Scotland and England) element of the European Science Foundation collaborative research project Fabricating Quality in European Education (FabQ), which extends the focus of the research into the European education space and, more specifically, into comparative contexts in Finland, Denmark and Sweden.
~~~~~~~~~~~~~~~~~~

The Programme for International Student Assessment (PISA) is conducted in three-year cycles and examines the knowledge and skills of 15-year-olds in compulsory education. Although PISA began as a joint study of the OECD member countries, it has since expanded its scope to involve non-member countries as well. Indeed, since the year 2000, when the first PISA study was conducted, more and more countries have been taking part, with the latest PISA (2006) having assessed students in 57 countries around the world, 27 of them non-member participant nations (see the OECD map from the executive summary of PISA 2006 below). The international dimension of the survey, which reaches beyond the boundaries of Europe to compare the student performance of countries as diverse as the United States, Greece and Indonesia, gives PISA particularly significant weight as an indicator of processes of education policy and governance on a national and an international, even global, stage.

OECD map of countries participating in PISA 2006 (source: PISA 2006 executive summary)

Indeed, the sheer scale of this enterprise may distract attention from fundamental questions about its purposes and effects. PISA is the OECD’s platform for policy construction, mediation and diffusion at a global level. The assessment of comparative system performance has direct effects on the shaping of future policy directions, and the reporting of PISA results adds to the sense of urgency in responses to it, as Nóvoa and Yariv-Mashal (2003: 425) point out:

Such researches produce a set of conclusions, definitions of ‘good’ or ‘bad’ educational systems, and required solutions. Moreover, the mass media are keen to diffuse the results of these studies, in such a manner that reinforces a need for urgent decisions, following lines of action that seem undisputed and uncontested, largely due to the fact that they have been internationally asserted.

Although it is probably too early to evaluate the impact of the publication of the latest PISA results, different cases across Europe illustrate quite different reactions: from the PISA-dominance of Finland, to the PISA-shock of Italy this time round, to the PISA-‘slump’, as the British press characterised it, of the UK. What is constant – and very similar to the experience after the publication of PISA 2000 and 2003 – is the acceptance of PISA in terms of the parameters and direction that it establishes, and its incorporation into local policy making.

Responsiveness to PISA across the different participating nations can be seen as an instance of what Luhmann and Schorr (1979) called ‘externalisation’. That is, the reference to ‘world situations’ enables policy-makers to make the case for education reforms at home that would otherwise be contested. Thus ‘local’ policy actors are using PISA as a form of domestic policy legitimation, or as a means of defusing discussion by presenting policy as based on robust evidence. The local policy actor also signals to an international audience, through PISA, the adherence of their nation to reform agendas, and thus joins the club of competitive nations. Moreover, the construction of PISA, with its promotion of orientations to applied and lifelong learning, has powerful effects on curricula and pedagogy in participating nations, and promotes the responsible individual and self-regulated subject. Finally, PISA is a major governing resource for Europe: it provides knowledge and information about systems, and embeds constant comparison within the EU member states – without the need for new or explicit forms of regulation in education. This reading of PISA supports the argument about its use and meaning as a political technology: a governing resource for both the national agency and the trans-national forces of the EU and the OECD.

References

Luhmann, N. and Schorr, K. E. (1979) Reflexionsprobleme im Erziehungssystem (Stuttgart: Klett-Cotta).

Nóvoa, A. and Yariv-Mashal, T. (2003) Comparative research in education: a mode of governance or a historical journey?, Comparative Education, 39 (4), 423-438.

Sotiria Grek