Global university rankings and their impact are a phenomenon of the 21st century. The effects of the launch of the first global university ranking in 2003, the Academic Ranking of World Universities (ARWU, or Shanghai ranking), were almost immediately felt. The subsequent surge in new rankings, as well as the more recent increase in specialised rankings, indicate continuing interest – despite criticism.
This criticism concerns the relative ease with which some indicators (such as the student-staff ratio) can be manipulated. Another challenge lies in the processes used to select the final list of indicators. These choices often reflect subjective biases rooted in historical and cultural norms, and their application typically favours historically renowned, research-intensive universities in the Anglo-Saxon and North American regions with a strong track record in the natural sciences. As a result, many institutions that are equally excellent, but in different ways, remain overlooked.
Pros and cons
Nevertheless, the allure of rankings seems difficult to resist. A 2014 European University Association (EUA) study on rankings in institutional strategies and processes showed that a majority of institutions thought rankings generally affected their institution’s reputation in a positive way, indicating that their biggest benefit is the visibility they offer to institutions.
The same study found that universities pay close attention to ranking results, even when they are aware of, and critical of, the methodological and conceptual shortcomings of rankings.
However, there are examples of rankings being misused, and a downward slip in, or absence from, a university ranking may have serious consequences. A recent EUA study exploring the use of indicators in European higher education found that the outcomes of national and international rankings factor into some national or system-level funding formulae.
The same report showed that rankings generally lack indicators concerned specifically with learning and teaching. In light of this finding, it is worrying that a 2016 report by the Lisbon Recognition Convention Committee showed that it is not entirely uncommon for recognition decisions concerning qualifications obtained at a foreign institution to be based on that institution’s position in one or several rankings.
Academic recognition in the European Higher Education Area is regulated through the Lisbon Recognition Convention, which already provides the framework for assessing foreign qualifications; an institution’s ranking position says nothing about the validity of the qualifications it awards. Hence the practice of basing recognition decisions partially on ranking results is both superfluous and misguided.
University rankings are diversifying
In the past decade, a multitude of specialised and non-traditional rankings has been launched. In 2014, the first U-Multirank ranking was published. Despite its name, U-Multirank is not a ranking in the traditional sense but a “multidimensional, user-driven approach”, which allows comparisons of performance across five areas of a university’s activities and at the level of specific study programmes.
In addition, several highly specialised thematic rankings have been developed, such as the sustainability-focused UI GreenMetric World University Rankings, the Times Higher Education (THE) Europe Teaching Rankings and the THE Impact Rankings, which aim to evaluate the extent to which institutions support the achievement of the United Nations Sustainable Development Goals. Concurrently, the feasibility of developing an equity-based university ranking is being explored.
One interesting outcome is that institutions that tend to fare less well in the more established, non-specialised international university rankings gain visibility in specialised ones.
In the 2020 THE Impact Rankings, for example, some of the universities that usually top the more established, well-known rankings do not appear at all, whereas the highest-scoring university in this ranking, the University of Auckland, ranks 179th in the same year’s edition of the THE World University Rankings.
This is, generally speaking, a positive development since it allows universities that are usually not in the league tables to step into the spotlight. It might also serve as an incentive for institutions’ stakeholders, most importantly students, to explore which university shares their values and interests.
Still suffering from well-known issues
Even though specialised rankings have the potential to offer more visibility to less prominent universities, they still employ the same faulty methodologies and indicators that can lead to distorted results.
U-Multirank’s institutional ranking, for example, measures an institution’s ‘international orientation’ through a set of six indicators, which reduce internationalised education to questions of language, mobility and other cross-border issues, such as ‘foreign language bachelor programmes’, ‘student mobility’ or ‘international doctorate degrees’.
Yet an institution’s international orientation is (or should be) defined by much more than cross-border activities and the use of a foreign language. As highlighted in a recent report by the EUA Learning and Teaching Thematic Peer Group, Internationalisation in Learning and Teaching, it should, for example, also cover course content, teacher training, extra-curricular activities and support services.
Hence, to evaluate an institution’s international orientation, an altogether more complex set of indicators would be needed. Moreover, many indicators are subject to external factors. This is particularly the case for mobility-related indicators, whose results are likely to be affected for years to come by the current coronavirus pandemic and the resulting lockdowns.
All of these issues imply that rankings and their results need to be handled with care and should not be used hastily in any high-impact decision-making process.
As highlighted in the recently published EUA report, there is a limit to what the indicators used by rankings can reveal.
Thus, if they are drawn upon in a debate about quality, they should always be complemented by information obtained through other tools that aim to evaluate performance in higher education. This way, the debate would be more nuanced and better able to reflect the complexities of the higher education landscape.
Helene Peterbauer is policy and project officer at the European University Association (EUA), where she specialises in learning and teaching. This article is also published on the EUA’s Expert Voices blog.