Last week, EUA published a new report on rankings, 'Rankings in Institutional Strategies and Processes: Impact or Illusion?' (RISP), which examines in detail how rankings are used for institutional development across Europe. This report follows up directly on two earlier EUA reports on rankings, which focused primarily on analysing the methodology of rankings. Earlier this year, NIFU also published a report on the Nordic countries, which offered a comprehensive deconstruction of the rankings to identify what drives success in them, and examined the impact of rankings on the leadership of research-intensive universities in the Nordic region.
Data for the EUA report was gathered in various forms. An online survey was sent out to all EUA members (about 850). The survey yielded responses from 171 institutions in 39 countries, with broad coverage across Europe; 90% of the respondents came from institutions that are included in a ranking. Following up on the survey, a total of 48 meetings were conducted through six site visits to understand in more detail how institutions work with rankings, and a roundtable was organised with 25 participants from 18 European countries to create an arena for peer learning and the sharing of experiences.
The main conclusion from this project is that rankings do indeed have an effect on institutional behaviour, but that this effect varies. 60% of the survey respondents replied that rankings are used in their institutional strategies – but the specific kind of use varied from examining certain indicators to using them in a comprehensive manner. Furthermore, the report highlights that as many as 39% use the results of rankings “to inform strategic, organisational, managerial or academic actions, and another third of respondents were planning to do so”. Unsurprisingly, rankings were widely used in marketing, but respondents also reported use in “the revision of university policies, the prioritisation of some research areas, recruitment criteria, resource allocation, revision of formal procedures, and the creation of departments or programme”.
A closer look at the Nordic region suggests that the impact of rankings there is more modest than in the wider European landscape. The NIFU study of the Nordic countries indicated that while one can find examples of rankings being used as specific targets in institutional strategies, there is a general sentiment that rankings are a poor measurement of quality, and that it is quality that really matters. While there is an understanding that “rankings are here to stay”, their importance is not necessarily emphasised. There is also an acknowledgement that the Nordic countries are in many cases on the periphery, and that many of their institutions are comparatively young; consequently, scoring high on reputation-driven rankings can be difficult. In one of the cases where ranking results were stated as an aim in the institutional strategy, it was indicated that this was because the university board had set the goal. As such, rankings had not really entered the core domain of institutional behaviour, and their main practical uses were found in the communications department and in evaluating inquiries from unknown institutions.
The two reports indicate that rankings have not lost their prominence in the public domain, and that higher education institutions are conscious of these developments. At the same time, the reports also suggest that there is likely to be considerable variation in responses at the institutional level, and the NIFU report on the Nordics shows that the actual strategic use of rankings may be more modest in some regions. The RISP report concludes that it is the mission of the university that matters, and that rankings should by no means form a basis for resource allocation – and here both reports point in a rather similar direction.