From rhetoric to reality: unpacking the numbers and practices of global higher ed

Numbers, partnerships, linkages, and collaboration: some key terms that seem to be bubbling up all over the place right now.

On the numbers front, the ever-active Cliff Adelman released, via the Institute for Higher Education Policy (IHEP), a new report titled The Spaces Between Numbers: Getting International Data on Higher Education Straight (November 2009). As the IHEP press release notes:

The research report, The Spaces Between Numbers: Getting International Data on Higher Education Straight, reveals that U.S. graduation rates remain comparable to those of other developed countries despite news stories about our nation losing its global competitiveness because of slipping college graduation rates. The only major difference—the data most commonly highlighted, but rarely understood—is the categorization of graduation rate data. The United States measures its attainment rates by “institution” while other developed nations measure their graduation rates by “system.”

The main target audience of this new report seems to be the OECD, though all of us who use international higher ed data can benefit from a good dig through the report. Adelman’s core objective is to facilitate the creation of a new generation of indicators, ones that are far more meaningful and policy-relevant than those currently in use.

Second, Universities UK (UUK) released a data-laden report titled The impact of universities on the UK economy. As the press release notes:

Universities in the UK now generate £59 billion for the UK economy putting the higher education sector ahead of the agricultural, advertising, pharmaceutical and postal industries, according to new figures published today.

This is the key finding of Universities UK’s latest UK-wide study of the impact of the higher education sector on the UK economy. The report – produced for Universities UK by the University of Strathclyde – updates earlier studies published in 1997, 2002 and 2006 and confirms the growing economic importance of the sector.

The study found that, in 2007/08:

  • The higher education sector spent some £19.5 billion on goods and services produced in the UK.
  • Through both direct and secondary or multiplier effects this generated over £59 billion of output and over 668,500 full time equivalent jobs throughout the economy. The equivalent figure four years ago was nearly £45 billion (25% increase).
  • The total revenue earned by universities amounted to £23.4 billion (compared with £16.87 billion in 2003/04).
  • Gross export earnings for the higher education sector were estimated to be over £5.3 billion.
  • The personal off-campus expenditure of international students and visitors amounted to £2.3 billion.

Professor Steve Smith, President of Universities UK, said: “These figures show that the higher education sector is one of the UK’s most valuable industries. Our universities are unquestionably an outstanding success story for the economy.”

See pp. 16-17 for a brief discussion of the impact of international student flows into the UK system.

These two reports are interesting examples of contributions to the debate about the meaning and significance of higher education vis-à-vis relative growth and decline at a global scale, and about the value of a key (and ostensibly under-recognized) sector of the national (in this case UK) economy.

And third, numbers, viewed from the perspective of pattern and trend identification, were amply evident in a new Thomson Reuters report (CHINA: Research and Collaboration in the New Geography of Science) co-authored by the database crunchers from Evidence Ltd., a Leeds-based firm and recent Thomson Reuters acquisition. One valuable aspect of this report is that it unpacks the broad trends and flags the key disciplinary and institutional geographies that make up China’s new geography of science. As someone who worked at the National University of Singapore (NUS) for four years, I can understand why NUS is now China’s No. 1 institutional collaborator (see p. 9), though such ‘why’ questions are not discussed in this type of broad mapping-cum-PR report for Evidence and Thomson Reuters.

[Table 4 from the Thomson Reuters report]

Shifting tack, two new releases about international double and joint degrees — one (The Graduate International Collaborations Project: A North American Perspective on Joint and Dual Degree Programs) by the Council of Graduate Schools (CGS) in North America, and one (Joint and Double Degree Programs: An Emerging Model for Transatlantic Exchange) by the Institute of International Education (IIE) and the Freie Universität Berlin — remind us of the emerging desire to craft more focused, intense and ‘deep’ relations between universities, in contrast to the current approach, which amounts to the promiscuous acquisition of hundreds if not thousands of memoranda of understanding (MoUs).

The IIE/Freie Universität Berlin book (link here for the table of contents) addresses various aspects of this development process:

The book seeks to provide practical recommendations on key challenges, such as communications, sustainability, curriculum design, and student recruitment. Articles are divided into six thematic sections that assess the development of collaborative degree programs from beginning to end. While the first two sections focus on the theories underpinning transatlantic degree programs and how to secure institutional support and buy-in, the third and fourth sections present perspectives on the beginning stages of a joint or double degree program and the issue of program sustainability. The last two sections focus on profiles of specific transatlantic degree programs and lessons learned from joint and double degree programs in the European context.

It is clear that international joint and double degrees are becoming a genuine phenomenon; so much so that key institutions including the IIE, the CGS, and the EU are all paying close attention to the degrees’ uses, abuses, and efficacy. We should therefore view this new book as an attempt to promote such degrees, but in a manner that examines the many forces shaping the collaborative process across space and between institutions. International partnerships are not simple to create, yet they are being demanded by more and more stakeholders.  Why?  Dissatisfaction that the rhetoric of ‘internationalization’ does not match the reality, and that there is a ‘deliverables’ problem.

Indeed, we hosted some senior Chinese university officials here in Madison several months ago and they used the term “ghost MoUs”, reflecting their dissatisfaction with filling filing cabinet after filing cabinet with signed MoUs that lead to absolutely nothing. In contrast, engagement via joint and double degrees, or via other forms of partnership (e.g., see International partnerships: a legal guide for universities), cannot help but deepen the level of connection between institutions of higher education on a number of levels. It is easy to ignore an MoU, but not so easy to ignore a bilateral scheme with clearly defined deliverables, a timetable for assessment, and a budget.

The value of tangible forms of international collaboration was certainly on view when I visited Brandeis University earlier this week.  Brandeis’ partnership with Al-Quds University (in Jerusalem) links “an Arab institution in Jerusalem and a Jewish-sponsored institution in the United States in an exchange designed to foster cultural understanding and provide educational opportunities for students, faculty and staff.”  Projects undertaken via the partnership have included administrative exchanges, academic exchanges, teaching and learning projects, and partnership documentation (an important but often forgotten activity). The level of commitment to the partnership at Brandeis was genuinely impressive.

In the end, as debates about numbers, rankings, partnerships, and MoUs — internationalization more generally — show us, it is only when we start grinding through the details and ‘working at the coal face’ (as Brandeis and Al-Quds seem to be doing), albeit in a strategic way, that we can really shift from rhetoric to reality.

Kris Olds

THE-QS World University Rankings 2009: Year 6 of market making

Well, an email arrived today and I just could not help myself… I clicked on the THE-QS World University Rankings 2009 links that were provided to see who received what ranking.  In addition, I did a quick Google scan of news outlets and weblogs to see what spins were already underway.

The THE-QS ranking seems to have become the locomotive for the Times Higher Education, a higher education newsletter published in the UK once per week.  In contrast to the daily Chronicle of Higher Education and the daily Inside Higher Ed (both based in the US), the Times Higher Education seems challenged to provide quality content of some depth even on its relatively lax once-a-week schedule.  I spent four years in the UK in the mid-1990s, and cannot help but note the decline in the quality of coverage of UK higher education news over the last decade-plus.

It seems as if the Times Higher has decided to allocate most of its efforts to promoting the creation and propagation of this global ranking scheme, rather than providing detailed, analytical, and critical coverage of issues in the UK, let alone in the European Higher Education Area. Six steady years of rankings generate attention and advertising revenue, and enhance some aspects of power and perceived esteem.  But, in the end, where is the Times Higher in analyzing the forces shaping the systems in which all of these universities are embedded, or the complex forces shaping university development strategies?  Rather, we primarily seem to get increasingly thin articles based on relatively limited original research, heaps of advertising (especially jobs), and now regular build-ups to the annual rankings frenzy. In addition, their partnership with QS Quacquarelli Symonds is leading to new regional rankings; a clear form of market-making at a new, unexploited geographic scale.  Of course there are some useful insights generated by rankings, but the rankings attention is arguably making the Times Higher lazier and, dare I say, irresponsible, given the increasing significance of higher education to modern societies and economies.

In addition, I continue to be intrigued by how UK-based analysts and institutions seem infatuated with the term “international”, as if it necessarily means better quality than “national”. See, for example, the “international” elements of the current ranking in the figure below:

[Figure: THE-QS World University Rankings 2009 indicator scores, including the “international” components]

Leaving aside my problems with the limited scale of the survey numbers (can 9,386 academics really represent the “world’s” academics? Can 3,281 firm representatives represent the “world’s” employers?) and with the approach to weighting, why should a higher proportion of “international” faculty and students necessarily enhance the quality of university life?

Some universities, especially in Australasia and the UK, seek high proportions of international students to compensate for declining levels of government support, and weak levels of extramural funding via research income (which provides streams of income via overhead charges). Thus the higher number of international students may be, in some cases, inversely related to the quality of the university or the health of the public higher education system in which the university is embedded.

In addition, in some contexts universities are legally required to limit “non-resident” student intake given the nature of the higher education system in place.  But in the metrics used here, universities with the incentives and the freedom to let in large numbers of foreign students, for reasons other than the quality of said students, are rewarded with a higher rank.
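To make the mechanics concrete, here is a minimal sketch (in Python, with invented indicator names, weights and scores – not the actual THE-QS methodology or data) of how a fixed weight on “international” proportions feeds straight into a composite score:

```python
# Illustrative only: hypothetical indicator scores (0-100) and weights,
# not the actual THE-QS 2009 methodology or data.
indicators = ["academic_review", "employer_review", "staff_student",
              "citations", "intl_faculty", "intl_students"]
weights = {"academic_review": 0.40, "employer_review": 0.10,
           "staff_student": 0.20, "citations": 0.20,
           "intl_faculty": 0.05, "intl_students": 0.05}

universities = {
    # identical teaching/research scores, very different "international" intake
    "Univ A (mostly domestic intake)": dict(academic_review=70, employer_review=70,
                                            staff_student=70, citations=70,
                                            intl_faculty=10, intl_students=10),
    "Univ B (heavy foreign recruitment)": dict(academic_review=70, employer_review=70,
                                               staff_student=70, citations=70,
                                               intl_faculty=95, intl_students=95),
}

for name, scores in universities.items():
    composite = sum(weights[k] * scores[k] for k in indicators)
    print(f"{name}: composite = {composite:.1f}")

# Univ B outranks Univ A purely on the proportion of international staff/students,
# even though every teaching- and research-related indicator is identical.
```

The point is simply that a fixed weight on foreign intake rewards recruitment volume regardless of what that intake contributes to educational quality – which is precisely the incentive problem described above.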

The discourse of “international” is elevated here, much as it was in the last Research Assessment Exercise (RAE) in the UK, with “international” serving as a codeword for higher quality.  But international is just that – international – and it means nothing more than that unless we assess how good they (international students and faculty) are, what they contribute to the educational experience, and what lasting impacts they generate.

In any case, the THE-QS rankings are out.  The relative positions of universities in the rankings will be debated, and used to provide legitimacy for new or previously unrecognized claims. But it is really the methodology that needs to be unpacked, as well as the nature and logics of the rankers, rather than just the institutions being ranked.

Kris Olds

CHERPA-network based in Europe wins tender to develop alternative global ranking of universities


Finally, the decision on who has won the European Commission’s million-euro tender – to develop and test a global ranking of universities – has been announced.

The successful bidder – the CHERPA network (the Consortium for Higher Education and Research Performance Assessment) – is charged with developing a ranking system to overcome what the European Commission regards as the limitations of the Shanghai Jiao Tong and QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises a consortium of leading European institutions in the field; all have been developing and offering rather different approaches to ranking over the past few years (see our earlier stories here, here and here for some of the potential contenders).

Will this new European Commission-driven initiative set the proverbial European cat amongst the transatlantic alliance pigeons?

As we have noted in earlier commentary on university rankings, the different approaches tip the rankings playing field in the direction of different interests. Much to the chagrin of the continental Europeans, high-status US universities do well on the Shanghai Jiao Tong University ranking, whilst Britain’s QS-Times Higher Education ranking tends to see UK universities feature more prominently.

CHERPA will develop a design that follows the so-called ‘Berlin Principles on the ranking of higher education institutions‘. These principles stress the need to take into account the linguistic, cultural and historical contexts of educational systems [something of an irony for those watching UK higher education developments last week following a Cabinet reshuffle, in which the reference to ‘universities’ in the departmental name was dropped.  The two-year-old Department for Innovation, Universities and Skills has now been abandoned in favor of a mega-Department for Business, Innovation and Skills! (read more here)].

According to the website of one of the consortium members, CHE:

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

The different rankings will be targeted at different stakeholders: They will support decision-making in universities and especially better informed study decisions by students. Rankings that create transparency for prospective students should promote access to higher education.

University World News, in its report on the announcement published today, notes:

Testing will take place next year and must include a representative sample of at least 150 institutions with different missions in and outside Europe. At least six institutions should be drawn from the six large EU member states, one to three from the other 21, plus 25 institutions in North America, 25 in Asia and three in Australia.

There are multiple logics and politics at play here. On the one hand, a European ranking system may well give the European Commission more higher education governance capacity across Europe, strengthening its steering over national systems in areas like ‘internationalization’ and ‘regional engagement’ – two key areas that have been identified for work to be undertaken by CHERPA.

On the other hand, this new European ranking system — when realized — might also appeal to countries in Latin America, Africa and Asia that currently do not feature in any significant way in the two dominant systems. Like the Bologna Process, the CHERPA ranking system might well find itself generating ‘echoes’ around the globe.

Or will regions around the world prefer to develop and promote their own niche ranking systems, elements of which were evident in the recently launched QS.com Asia ranking?  Whatever the outcome, as we have observed before, there is a thickening industry, with profits to be had, on this aspect of the emerging global higher education landscape.

Susan Robertson

CRELL: critiquing global university rankings and their methodologies

This guest entry has been kindly prepared for us by Beatrice d’Hombres and Michaela Saisana of the EU-funded Centre for Research on Lifelong Learning (CRELL) and Joint Research Centre. This entry is part of a series on the processes and politics of global university rankings (see here, here, here and here).

Since 2006, Beatrice d’Hombres has been working in the Unit of Econometrics and Statistics of the Joint Research Centre of the European Commission. She is part of the Centre for Research on Lifelong Learning. Beatrice is an economist who completed a PhD at the University of Auvergne (France). She has particular expertise in education economics and applied econometrics.


Michaela Saisana works for the Joint Research Centre (JRC) of the European Commission at the Unit of Econometrics and Applied Statistics. She has a PhD in Chemical Engineering and in 2004 she won the European Commission – JRC Young Scientist Prize in Statistics and Econometrics for her contribution on the robustness assessment of composite indicators and her work on sensitivity analysis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expansion of access to higher education, the growing mobility of students, the need for an economic rationale behind the allocation of public funds, and the demand for greater accountability and transparency have all contributed to the need to compare university quality across countries.

Recognition of this need has also been greatly stirred by the publication, since 2003, of the ‘Shanghai Jiao Tong University Academic Ranking of World Universities’ (henceforth SJTU), which measures university research performance across the world. The SJTU ranking tends to reinforce the evidence that the US is well ahead of Europe in terms of cutting-edge university research.

Its rival is the ranking computed annually, since 2004, by the Times Higher Education Supplement (henceforth THES). Both these rankings are now receiving worldwide attention and constitute an occasion for national governments to comment on the relative performances of their national universities.

In France, for example, the publication of the SJTU ranking is always accompanied by a surge of newspaper articles which either bemoan the poor performance of French universities or denounce the inadequacy of the SJTU ranking for properly assessing the attractiveness of the fragmented landscape of French higher education institutions (see Les Echos, 7 August 2008).

Whether the rankers intend it or not, university rankings have taken on a life of their own: they are used by national policy makers to stimulate debates about national university systems and can ultimately lead to specific education policy orientations.

At the same time, however, these rankings are subject to a plethora of criticism. Critics point out that the chosen indicators are mainly based on research performance, with no attempt to take into account the other missions of universities (in particular teaching), and that they are biased towards large, English-speaking and hard-science institutions. Whilst the limitations of the indicators underlying the THES or SJTU rankings have been extensively discussed in the relevant literature, there has been no attempt so far to examine in depth the volatility of university ranks with respect to the methodological assumptions made in compiling the rankings.

The purpose of the JRC/Centre for Research on Lifelong Learning (CRELL) report is to fill this gap by quantifying how much university rankings depend on the methodology, and to reveal whether the Shanghai ranking serves the purposes it is used for, and whether its immediate European alternative, the British THES, can do better.

To that end, we carry out a thorough uncertainty and sensitivity analysis of the 2007 SJTU and THES rankings under a plurality of scenarios in which we simultaneously activate different sources of uncertainty. The sources cover a wide spectrum of methodological assumptions (the set of selected indicators, the weighting scheme, and the aggregation method).

This implies that we deviate from the classic approach – also taken in the two university ranking systems – of building a composite indicator by a simple weighted summation of indicators. Subsequently, a frequency matrix of the university ranks is calculated across the different simulations. Such a multi-modeling approach, and the presentation of the frequency matrix rather than the single ranks, allows one to deal with the criticism, often made of league tables and ranking systems, that ranks are presented as if they were calculated under conditions of certainty when this is rarely the case.
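For readers who want to see what such a frequency matrix of ranks looks like in practice, here is a minimal Python sketch of the general multi-modeling idea – toy indicator scores and randomly drawn weights, not the actual JRC/CRELL indicators, simulation design or code:

```python
# Sketch: re-rank a toy set of universities under many randomly drawn weighting
# schemes and two aggregation rules, then tabulate how often each university
# obtains each rank (the "frequency matrix").
import random
from collections import defaultdict

random.seed(0)

# Hypothetical normalised indicator scores (three indicators per university).
scores = {
    "Univ A": [0.9, 0.7, 0.8],
    "Univ B": [0.6, 0.9, 0.7],
    "Univ C": [0.8, 0.6, 0.9],
    "Univ D": [0.5, 0.5, 0.6],
}

def composite(values, weights, method):
    """Aggregate one university's indicator values under a given scheme."""
    if method == "weighted_sum":
        return sum(w * v for w, v in zip(weights, values))
    # Geometric aggregation penalises uneven indicator profiles more strongly.
    result = 1.0
    for w, v in zip(weights, values):
        result *= max(v, 1e-9) ** w
    return result

rank_freq = defaultdict(lambda: defaultdict(int))  # university -> rank -> count
for _ in range(1000):
    raw = [random.random() for _ in range(3)]
    weights = [r / sum(raw) for r in raw]            # one random weighting scheme
    method = random.choice(["weighted_sum", "geometric"])
    ordered = sorted(scores, key=lambda u: composite(scores[u], weights, method),
                     reverse=True)
    for rank, univ in enumerate(ordered, start=1):
        rank_freq[univ][rank] += 1

# The frequency matrix: how often each university landed on each rank.
for univ in scores:
    print(univ, dict(rank_freq[univ]))
```

A university whose rank counts are spread across several positions has no meaningful single rank; one whose counts pile up on a single position is robust to the methodological choices.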

The main findings of the report are the following. Both rankings are robust only in identifying the top 15 performers on either side of the Atlantic, and unreliable on the exact ordering of all other institutions. Even when all twelve indicators are combined in a single framework, the space of inference is too wide for about 50 of the 88 universities we studied, and thus no meaningful rank can be estimated for those universities. Finally, the JRC report suggests that the THES and SJTU rankings should be improved along two main directions:

  • first, the compilation of university rankings should always be accompanied by a robustness analysis based on a multi-modeling approach. We believe that this could constitute an additional recommendation alongside the 16 existing Berlin Principles;
  • second, it is necessary to revisit the set of indicators, so as to enrich it with other dimensions that are crucial to assessing university performance and that are currently missing.

Beatrice d’Hombres and Michaela Saisana

Ranking – in a different (CHE) way?

GlobalHigherEd has been profiling a series of entries on university rankings as an emerging industry and technology of governance. This entry has been kindly prepared for us by Uwe Brandenburg. Since 2006 Uwe has been project manager at the Centre for Higher Education Development (CHE) and CHE Consult, a think tank and consultancy focusing on higher education reform.  Uwe has an MA in Islamic Studies, Politics and Spanish from the University of Münster (Germany), and an MScEcon in Politics from the University of Wales, Swansea.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Talking about rankings usually means talking about league tables. Values are calculated on the basis of weighted indicators, turned into figures, added up and formed into an overall value, often indexed to 100 for the best institution and counting down from there. Moreover, in many cases entire universities are compared, and the scope of indicators is somewhat limited. We at the Centre for Higher Education Development (CHE) are highly sceptical about this approach. For more than 10 years we have been running our own ranking system, which is so different that some experts have argued it might not be a ranking at all – which is actually not true. Just because the Toyota Prius uses a very different technology to produce energy does not exclude it from the species of automobiles. What, then, are the differences?


Firstly, we do not believe in ranking entire HEIs. This is mainly because such a ranking necessarily blurs the differences within an institution. For us, the target group has to be the starting point of any ranking exercise. One can fairly argue that it does not help a student looking for a physics department to learn that university A is average when in fact its physics department is outstanding, its sociology appalling and the rest mediocre. It is the old problem of the man with his head in the fire and his feet in the freezer: a doctor would diagnose that the man is in a serious condition, while a statistician might claim that overall he is doing fine.

So instead we always rank at the subject level. And given the results of the first ExcellenceRanking – which focused on natural sciences and mathematics in European universities, with a clear target group of prospective Master’s and PhD students – we think this proves the point: only four institutions excelled in all four subjects, another four in three, while most excelled in only one subject. And this was within a quite closely related field.


Secondly, we do not create values by weighting indicators and then calculating an overall value. Why is that? The main reason is that any weight is necessarily arbitrary, or in other words political: the person doing the weighting decides which weight to give, and by doing so pre-decides the outcome of any ranking. You make it even worse when you then add the different values together into one overall value, because this blurs the differences between individual indicators.

Say a discipline is publishing a lot but nobody reads it. If you give publications a weight of 2 and citations a weight of 1, it will look like the department is very strong. If you do it the other way around, it will look pretty weak. If you then add the values, you make it even worse, because you blur the difference between the two performances. And those two indicators are even rather closely related. If you summarize results from research indicators with reputation indicators, you make things entirely irrelevant.
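A toy calculation makes the point. The following short Python sketch (invented, normalised figures – not CHE data) shows how the choice of weights, rather than the underlying performance, decides which department ‘wins’:

```python
# Hypothetical figures, normalised 0-1:
# Dept X publishes heavily but is rarely cited; Dept Y publishes less but is read widely.
depts = {"Dept X": {"publications": 0.9, "citations": 0.2},
         "Dept Y": {"publications": 0.4, "citations": 0.8}}

def score(d, w_pub, w_cit):
    return w_pub * d["publications"] + w_cit * d["citations"]

for w_pub, w_cit in [(2, 1), (1, 2)]:
    ranking = sorted(depts, key=lambda name: score(depts[name], w_pub, w_cit),
                     reverse=True)
    print(f"weights pub={w_pub}, cit={w_cit}: {ranking}")

# With pub=2/cit=1, Dept X comes out 'stronger'; with pub=1/cit=2 the order flips.
# The weighting choice, not the underlying performance, decides the league position.
```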

Instead, we let the indicator results stand on their own and let the user decide what is important for his or her personal decision-making process. For example, in the classical ranking we allow users to create “my ranking”, so they can choose the indicators they want to look at and the order in which to view them.

Thirdly, we strongly object to the idea of league tables. If the values which create the table are technically arbitrary (because of the weighting and the accumulation), the league table positions create the even worse illusion of distinctive and decisive differences between places. They bring alive the impression of a difference in quality (no time or space here to argue the tricky issue of what quality might be) that is measurable to the percentage point – in other words, that there is a qualitative, objectively recognizable and measurable difference between place number 12 and place number 15. This is normally not the case.

Moreover, small mathematical differences can create huge differences in league table positions. Take the THES-QS: even in the SocSci subject cluster you find a mere difference of 4.3 points on a 100-point scale between league ranks 33 and 43. In the overall university rankings, there is a meager 6.7-point difference between ranks 21 and 41, going down to a slim 15.3-point difference between ranks 100 and 200. That is to say, the league table positions of HEIs might differ by much less than a single point, or less than 1% (of an arbitrarily set figure). Thus, the scores tell us much less than the league positions suggest.

Our approach, therefore, is to create groups (top, middle, bottom) which refer to the performance of each HEI relative to the other HEIs.
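By way of illustration only – the scores below are invented and this is not CHE’s actual grouping procedure – a short Python sketch shows how scores that differ by only a few points on a 100-point scale translate into many distinct league-table ‘places’, while a simple top/middle/bottom grouping absorbs those trivial differences:

```python
# Thirty hypothetical HEIs whose scores are bunched between 60.0 and 74.5.
scores = {f"HEI {i:02d}": 60.0 + i * 0.5 for i in range(30)}

ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
n = len(ordered)
for position, (hei, score) in enumerate(ordered, start=1):
    # One simple way to form relative groups: split the ordered list into thirds.
    if position <= n // 3:
        group = "top"
    elif position <= 2 * n // 3:
        group = "middle"
    else:
        group = "bottom"
    print(f"rank {position:2d}  score {score:.1f}  group: {group}")

# Ranks 1 and 10 differ by just 4.5 points, yet a league table presents them as
# ten distinct 'places'; the grouped presentation keeps them in the same band.
```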


This means our rankings are not as easily read as the others. However, we strongly believe in the cleverness of the users. Moreover, we try to communicate at every possible level that every ranking (and therefore also ours) is based on indicators chosen by the ranking institution. Consequently, the results of a given ranking can tell you something about how an HEI performs within the framework of what the ranker thinks interesting, necessary, relevant, etc. Rankings therefore NEVER tell you who is the best, but maybe (depending on the methodology) who is performing best (or, in our case, better than average) in aspects considered relevant by the ranker.

A small but highly relevant aspect might be added here. Rankings (in the HE system as well as in other areas of life) might suggest that a result on an indicator proves that an institution is performing well in the area measured by that indicator. Well, it does not. All an indicator does is hint that, provided the data are robust and relevant, the results give some idea of how close the gap is between the performance of the institution and the best possible result (if such a benchmark exists). The important word is “hint”, because “indicare” – from which the word “indicator” derives – means exactly this: a hint, not a proof. And in the case of many quantitative indicators, what counts as “best” or “better” is again a political decision if the indicator stands alone (e.g. are more international students better? Are more exchange agreements better?).

This is why we argue that rankings have a useful function in terms of creating transparency if they are properly used, i.e. if users are aware of the limitations, the purpose, the target groups and the agenda of the ranking organization, and if the ranking is understood as one instrument among others for making whatever decision relates to an HEI (study, cooperation, funding, etc.).

Finally, modesty is maybe what a ranker should have in abundance. Having run the ExcellenceRanking through three different phases (the initial round in 2007, a second phase with new subjects right now, and a repetition of the natural sciences just starting), I am certainly aware of one thing: however strongly we aim at being sound and coherent, and however intensely we re-evaluate our efforts, there is always the chance of missing something – of not picking an excellent institution. For the world of ranking, Einstein’s conclusion holds a lot of truth:

Not everything that can be counted counts, and not everything that counts can be counted.

For further aspects see:
http://www.che-ranking.de/cms/?getObject=47&getLang=de
http://www.che-ranking.de/cms/?getObject=44&getLang=de
Federkeil, Gero (2008) ‘Rankings and Quality Assurance in Higher Education’, Higher Education in Europe, 33, pp. 209-218.
Federkeil, Gero (2008) ‘Ranking Higher Education Institutions – A European Perspective’, Evaluation in Higher Education, 2, pp. 35-52.
Other researchers specialising in this area (and often referring to our method) include, for example, Alex Usher, Marijk van der Wende and Simon Marginson.

Uwe Brandenburg

Regional content expansion in Web of Science®: opening borders to exploration

Editor’s note: this guest entry was written by James Testa, Senior Director, Editorial Development & Publisher Relations, Thomson Reuters. It was originally published on an internal Thomson Reuters website. James Testa joined Thomson Reuters (then ISI) in 1983. From 1983 through 1996 he managed the Publisher Relations Department and was directly responsible for building and maintaining working relations with the over three thousand international scholarly publishers whose journals are indexed by Thomson Reuters.  In 1996 Mr. Testa was appointed Director of Editorial Development. In this position he directed a staff of information professionals in the evaluation and selection of journals and other publication formats for coverage in the various Thomson Reuters products. In 2007 he was named Senior Director, Editorial Development & Publisher Relations.  In this combined role he continues to build content for Thomson Reuters products and to work to increase the efficiency of communication with the international STM publishing community. He is a member of the American Society for Information Science and Technology (ASIST) and has spoken frequently on behalf of Thomson Reuters in the Asia Pacific region, South America, and Europe.

Our thanks also go to Susan Besaw of Thomson Reuters for facilitating access to the essay. This guest entry ties in to one of our earlier entries on this topic (‘Thomson Reuters, China, and ‘regional’ journals: of gifts and knowledge production’), as well as a fascinating new entry (‘The Canadian Center of Science and Education and Academic Nationalism’) posted on the consistently excellent Scott Sommers’ Taiwan Blog.

~~~~~~~~~~~~~~~~~~~~~

Thomson Reuters extends the power of its Journal Selection Process by focusing on the world’s best regional journals. The goal of this initiative is to enrich the collection of important and influential international journals now covered in Web of Science with a number of superbly produced journals whose content is of specific regional importance.

Since its inception nearly fifty years ago by Eugene Garfield, PhD, the primary goal of the Journal Selection Process has been to identify those journals which formed the core literature of the sciences, social sciences, and arts & humanities. These journals publish the bulk of scholarly research, receive the most citations from the surrounding literature, and have the highest citation impact of all journals published today. The journals selected for the Web of Science are, in essence, the scholarly publications that meet the broadest research needs of the international community of researchers. They have been selected on the basis of their high publishing standards, their editorial content, the international diversity of their contributing authors and editorial board members, and on their relative citation frequency and impact. International journals selected for the Web of Science define the very highest standards in the world of scholarly publishing.

In recent years, however, the user community of the Web of Science has expanded gradually from what was once a concentration of major universities and research facilities in the United States and Western Europe to an internationally diverse group including virtually all major universities and research centers in every region of the world. Where once the Thomson Reuters sales force was concentrated in Philadelphia and London, local staff are now committed to the service of customers at offices in Japan, Singapore, Australia, Brazil, China, France, Germany, Taiwan, India, and South Korea.

As the global distribution of Web of Science expands into virtually every region on earth, the importance of regional scholarship to our emerging regional user community also grows. Our approach to regional scholarship effectively extends the scope of the Thomson Reuters Journal Selection Process beyond the collection of the great international journal literature: it now moves into the realm of the regional journal literature. Its renewed purpose is to identify, evaluate, and select those scholarly journals that target a regional rather than an international audience. Bringing the best of these regional titles into the Web of Science will illuminate regional studies that would otherwise not have been visible to the broader international community of researchers.

In the Fall of 2006, the Editorial Development Department of Thomson Reuters began this monumental task. Under the direction of Maureen Handel, Manager of Journal Selection, the team of subject editors compiled a list of over 10,000 scholarly publications representing all areas of science, social science, the arts, and humanities. Over the next twelve months the team was able to select 700 regional journals for coverage in the Web of Science.

The Web of Science Regional Journal Profile

These regional journals are typically published outside the US or UK. Their content often centers on topics of regional interest or is presented from a regional perspective. Authors may be largely from the region rather than an internationally diverse group. Bibliographic information is in English, with the exception of some arts and humanities publications that are by definition in the native language (e.g. literature studies). Cited references must be in the Roman alphabet. All journals selected publish on time and are formally peer reviewed. Citation analysis may be applied, but the real importance of a regional journal is measured by the specificity of its content rather than by its citation impact.

Subject Areas and Their Characteristics

These first 700 journals selected in 2007 included 161 Social Science titles, 148 Clinical Medicine titles, 108 Agriculture/Biology/Environmental Science titles, 95 Physics/Chemistry/Earth Science titles, 89 Engineering/Computing/Technology titles, 61 Arts/Humanities titles, and 38 Life Sciences titles. The editors’ exploration of each subject area surfaced hidden treasure.

Social Sciences:
The European Union and Asia Pacific regions yielded over 140 social science titles. Subject areas such as business, economics, management, and education have been enriched with regional coverage. Several fine law journals have been selected and will provide balance in an area normally dominated by US journals. Because of the characteristically regional nature of many studies in the social sciences, this area will provide a rich source of coverage that would otherwise not be available to the broader international community.

Clinical Medicine:
Several regional journals dealing with General Medicine, Cardiology, and Orthopedics have been selected. Latin America, the Asia Pacific, and the European Union are all well represented here. Research in surgery is a growing area in regional journals. Robotic and other novel surgical technology is no longer limited to the developed nations; it now originates in China and India as well, and has potential use internationally.

The spread of diseases such as bird flu and SARS eastward and westward from Southeast Asia is a high-interest topic both regionally and internationally. In some cases host countries develop defensive practices and, if enough time elapses, vaccines. Regional studies on these critical subjects will now be available in Web of Science.

Agriculture/Biology/Environmental Sciences:
Many of the selected regional titles in this area include new or endemic taxa of interest globally. Likewise, regional agricultural or environmental issues are now known to have global consequences. Many titles are devoted to niche topics such as polar/tundra environmental issues or tropical agronomy. Desertification has heightened the value of literature from central Asian countries. Iranian journals report voluminously on the use of native, desert-tolerant plants and animals that may soon be in demand by desertification-threatened countries.

Physics/Chemistry/Earth Sciences:
Regional journals focused on various aspects of Earth Science are now available in Web of Science. These include titles focused on geology, geography, oceanography, meteorology, climatology, paleontology, remote sensing, and geomorphology. Again, the inherently regional nature of these studies provides a unique view of the subject and brings forward studies heretofore hidden.

Engineering/Computing/Technology:
Engineering is a subject of global interest. Regional journals in this area typically present subject matter researched by regional authors for their local audience. Civil and Mechanical Engineering studies are well represented, providing solutions to engineering problems arising from local geological, social, environmental, climatological, or economic factors.

Arts & Humanities:
The already deep coverage of Arts & Humanities in Web of Science is now enhanced by additional regional publications focused on such subjects as History, Linguistics, Archaeology, and Religion. Journals from the European Union, Latin America, Africa, and Asia Pacific regions are included.

Life Sciences:
Life Sciences subject areas lending themselves to regional studies include parasitology, microbiology, and pharmacology. A specific example of valuable regional activity is stem cell research. The illegality of stem cell studies in an increasing number of developed countries has moved the research to various Asian countries, where it is of great interest inside and outside the region.

Conclusion

The primary mission of the Journal Selection Process is to identify, evaluate and select the top tier international and regional journals for coverage in the Web of Science. These are the journals that have the greatest potential to advance research on a given topic. In the pursuit of this goal Thomson Reuters has partnered with many publishers and societies worldwide in the development of their publications. As an important by-product of the steady application of the Journal Selection Process, Thomson Reuters is actively involved in raising the level of research communication as presented in journals. The objective standards described in the Journal Selection Process will now be focused directly on a new and expansive body of literature. Our hope, therefore, is not only to enrich the editorial content of Web of Science, but also to expand relations with the world’s primary publishers in the achievement of our mutual goal: more effective communication of scientific results to the communities we serve.

James Testa

Author’s note: This essay was compiled by James Testa, Senior Director, Editorial Development & Publisher Relations. Special thanks to Editorial Development staff members Maureen Handel, Mariana Boletta, Rodney Chonka, Lauren Gala, Anne Marie Hinds, Katherine Junkins-Baumgartner, Chang Liu, Kathleen Michael, Luisa Rojo, and Nancy Thornton for their critical reading and comments.