


The Ranking Game: Global University Rankings and HE Policies
By Julia Reis, Knowledge Manager at Edu:Manufaktura
The emergence of global university rankings has strongly contributed to universities’ increasing struggle to be seen as world-class institutions and has affected both institutional strategies and national policies. This article analyzes the factors that led to the emergence of global rankings, discusses their differing methodologies and examines their impact on institutional and national policies. It also weighs the advantages and disadvantages of rankings and closes with a general verdict.
The Emergence of Rankings
The first rankings to emerge were mostly national in scope, mainly supporting prospective students in choosing their universities. Examples include the Times Good University Guide, published since 1992, and the Spiegel Uni-Rankings, published since the late 1980s. University rankings seem to have struck a nerve: many universities quickly became obsessed with them.
A number of reasons for the gradual increase in the importance of international comparisons of universities can be put forward:
- Globalization of higher education
- The global increase in student numbers
- The general trend towards performance measurement and benchmarking in higher education.
Rankings can be a useful tool for universities to demonstrate their excellence and provide information to potential applicants.
What’s out there?
Rankings have been published by a range of for- and non-profit institutions. They differ significantly in the measurements they use and the ways in which they are constructed. An overview of some of the most prominent rankings and the main indicators they use can be found in Table 1.
Table 1: Sample rankings and their methodology
U.S. News & World Report America’s Best Colleges
Compiled since: 1983
Main indicators used: Quality of incoming students, average faculty salary, student/staff ratio, acceptance rate, per-student spending, class size, alumni giving rate, graduation rate, reputation
Other characteristics: National
The Times Good University Guide
Compiled since: 1992
Main indicators used: Quality of incoming students, student/staff ratio, research assessment, facilities spending, teaching assessment, job prospects
Other characteristics: National, ranking whole institutions and academic programs
CHE-Hochschulranking
Compiled since: 1998
Main indicators used: Graduate placement, facilities, research, international orientation, teaching quality, study contents
Other characteristics: National; ranking by subject areas; rating rather than ranking; large number of indicators used
Times Higher Education World University Ranking (THE)
Compiled since: 2003
Main indicators used: Teaching (reputation, survey), research (volume, income, reputation), international outlook, citations
Other characteristics: International
QS World University Ranking (by Subject)
Compiled since: 2004
Main indicators used: Citations per staff member or published paper, position of universities in domestic rankings, reputation survey performance
Other characteristics: International, ranking whole institutions and subject areas
Shanghai Jiao Tong University “Academic Ranking of World Universities”
Compiled since: 2003
Main indicators used: Number of Nobel Prizes won by alumni and faculty; number of articles in Nature or Science; citation indices
Other characteristics: International
U-Multirank
Compiled since: 2014
Main indicators used: Teaching quality (student satisfaction), research, knowledge transfer, international orientation, regional engagement
Other characteristics: International; ranking whole universities and subject areas; EU-funded; user-driven; including a large number of indicators
Sources: Dill & Soo 2005; Rauhvargers 2013; CWTS Leiden Ranking 2013; CHE-Ranking 2014; Aghion et al. 2010
As the table demonstrates, some rankings are of national scope, while others aim to rank universities on a global scale. Some choose to compare universities as a whole; others compile rankings with regard to specific subjects.
Furthermore, most rankings are indeed “rankings” in the strict sense of the term, suggesting that university A is better than university B. Others do not actually “rank” institutions in this way, but rather rate them on a number of measures without stating an institution’s exact position relative to its competitors. A newer trend in the presentation of online rankings is the possibility to customize the results according to the preferences of the user.
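To make the distinction concrete, the short sketch below contrasts the two approaches. The institution names, indicator values and band thresholds are invented for illustration only and do not reproduce the methodology of any actual ranking provider.

```python
# Toy illustration of ranking vs. rating; all names, values and thresholds are invented.

# Hypothetical scores of four institutions on a single indicator.
scores = {"Univ A": 82.0, "Univ B": 74.5, "Univ C": 91.3, "Univ D": 66.8}

# Strict ranking: every institution receives an exact position relative to its competitors.
for position, name in enumerate(sorted(scores, key=scores.get, reverse=True), start=1):
    print(f"{position}. {name} ({scores[name]})")

# Rating: institutions are only assigned to broad performance bands,
# without stating an exact position.
def band(score: float) -> str:
    if score >= 85:
        return "top group"
    if score >= 70:
        return "middle group"
    return "bottom group"

for name, score in scores.items():
    print(f"{name}: {band(score)}")
```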
Rankings have without a doubt had a major impact on higher education institutions (HEIs) by “injecting a new competitive dynamic into higher education” (Hazelkorn, 2009). Even institutions that are skeptical about their methodological validity grudgingly take the results into consideration when devising strategies for their future.
Many HEIs have embraced rankings as useful marketing tools, using their ranking positions as “selling points” for their university. Because an institution’s position in the rankings has become so important, many universities have chosen to concentrate on the indicators they can influence most easily through their own actions, for example by increasing the number of English-language articles published by their faculty. Others have even resorted to outright manipulation, for example by misreporting admission statistics to ranking institutes.
Positive aspects of rankings
One of the most important benefits of rankings is that they offer prospective students a useful guideline on which to base their study decisions. Even if ranking positions should not be, and indeed are not, the main criterion students rely on, top students in particular use rankings to narrow their choice down to a shortlist of universities before looking at them in more detail.
This positive aspect is further enhanced by subject rankings, as opposed to general institutional rankings, and by new features that allow users to individualize the criteria used to rank institutions according to their preferences.
Furthermore, rankings certainly have the potential to increase the transparency of university performance, making comparisons between institutions possible and improving their public accountability. Ideally, they thus motivate HEIs to improve their research and teaching performance, which would benefit students and national economies alike.
Negative aspects of rankings
Perhaps the most serious criticism is aimed at rankings’ lack of methodological validity. Most rankings suffer from a number of methodological flaws, including the failure to report confidence intervals for their indicators, a lack of detailed information on precisely how the indicators are weighted and aggregated, and doubts about the validity of the indicators themselves.
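The weighting concern can be illustrated with a small, purely hypothetical sketch: the same normalised indicator values can produce different orderings depending on how a provider chooses to weight and aggregate them, which is why undisclosed weighting schemes make composite scores hard to interpret. All names, values and weights below are invented.

```python
# Hypothetical example: how weighting choices drive composite ranking scores.
# Indicator values (already normalised to 0..1) and weights are invented.

institutions = {
    "Univ A": {"research": 0.90, "teaching": 0.60, "international": 0.70},
    "Univ B": {"research": 0.70, "teaching": 0.90, "international": 0.65},
}

def composite(scores: dict, weights: dict) -> float:
    """Weighted sum of normalised indicator scores (a common aggregation approach)."""
    return sum(weights[k] * scores[k] for k in weights)

research_heavy = {"research": 0.6, "teaching": 0.2, "international": 0.2}
teaching_heavy = {"research": 0.2, "teaching": 0.6, "international": 0.2}

for weights in (research_heavy, teaching_heavy):
    ordered = sorted(institutions,
                     key=lambda name: composite(institutions[name], weights),
                     reverse=True)
    print(weights, "->", ordered)

# "Univ A" comes first under the research-heavy weights, "Univ B" under the
# teaching-heavy weights, although the underlying indicator values never change.
```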
It is often pointed out that rankings focus much more on research than on teaching, even though the latter may be of greater relevance for prospective students.
Furthermore, the use of surveys to measure an institution’s reputation is highly problematic, as such judgments often depend strongly on a university’s previous ranking positions, producing “self-perpetuating” results that render the rankings less meaningful.
Institutions outside the Anglophone world often point to an inherent advantage of English-speaking institutions, which fare much better on citation indices, a very important measure in most rankings.
Similarly, “the arts, humanities and to a large extent the social sciences remain underrepresented in rankings” because bibliometric indicators tend to focus on journal publications rather than books. One way to address this problem is the increased compilation of subject rankings, which place institutions on a more level playing field.
Additionally, rankings can lead universities to focus on activities that improve their ranking position even when the resources could be put to much more effective use. Some commentators argue that rankings have led to excessive spending in areas that do not add real benefits to students’ educational experience. What is more, this spending may have been one of the factors behind the upsurge in tuition fees in countries such as the U.S.
To summarize...
In sum, rankings can be seen as valuable tools for prospective students, policy makers and the wider public to get an idea of the relative position of HEIs in the world of higher education. As such, they are “here to stay” despite their serious flaws and some of the negative effects they can have.
What is important is for the publishing institutions to take these criticisms into account when devising their methodologies, in order to continuously improve the validity and meaningfulness of their rankings. Recent developments, such as some providers themselves pointing to “biases or flaws” in their rankings and the introduction of a ranking audit by the International Ranking Expert Group (IREG), give reason for hope for the future of global university rankings.

