Guest blogger: From performance to conformance: The ‘coercive’ effect of performance-based governance systems
Dr Peter Woelert is a Research Fellow in the Melbourne Graduate School of Education at the University of Melbourne. He has training in sociology and philosophy. His research interests include current trends in university and research policy & governance, organizational change within universities, and organizational forms of university autonomy.
Over recent decades, many governments have sought to comprehensively reform the system-level policy and governance arrangements for their public universities. One central element of this reform has been the strengthening of performance-based funding mechanisms, with a growing proportion of public funds being distributed to universities according to the results (or ‘outputs’) achieved by them.
A striking case in point for this trend is the Australian higher education system. Since the late 1980s, various Australian governments have developed a funding system for their universities that places paramount emphasis on the performance-based allocation of funds. Most far-reaching have been the changes to the funding of university-based research activities.
In Australia, all recurrent governmental research funding – as distinct from the competitive research grant funding awarded by the two Australian research councils – is allocated to universities on the basis of an indicator-based funding formula. The key performance indicators are the number of publications, external research income, and the number of students completing research degrees such as a doctorate; these indicators are applied equally to all Australian universities. While indicator-based public research funds form only a relatively small proportion of the annual income of Australian universities (between 5 and 10%), they are taken extremely seriously at the university level and have had a lasting impact on institutional governance systems.
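To make the mechanics of such a formula concrete, here is a minimal sketch of how an indicator-based allocation might distribute a fixed funding pool across institutions. The university names, indicator weights, and figures are invented for illustration; they are not the actual parameters used in the Australian system.

```python
# Hypothetical sketch of an indicator-based funding formula.
# All weights and figures are illustrative assumptions.

def allocate_block_funding(universities, pool, weights):
    """Distribute a fixed pool in proportion to each university's
    weighted share of sector-wide performance on each indicator."""
    # Sector-wide totals for each indicator
    totals = {k: sum(u[k] for u in universities.values()) for k in weights}
    allocations = {}
    for name, u in universities.items():
        # Weighted sum of the university's share on each indicator
        share = sum(w * (u[k] / totals[k]) for k, w in weights.items())
        allocations[name] = pool * share
    return allocations

# Illustrative data: publications, external research income ($m),
# and research-degree completions
universities = {
    "Uni A": {"publications": 3000, "income": 200, "completions": 600},
    "Uni B": {"publications": 1000, "income": 50, "completions": 200},
}
# Illustrative weights (they must sum to 1 for the pool to be exhausted)
weights = {"publications": 0.1, "income": 0.6, "completions": 0.3}

print(allocate_block_funding(universities, pool=100.0, weights=weights))
```

Because the same weights apply to every institution, a university maximizes its allocation only by improving on exactly these indicators, which is the incentive structure the post goes on to describe.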
Most notable in this regard is that, within the Australian university system, the reshaping of system-level funding arrangements has triggered a vertical adaptation process, as a result of which the various organizational levels of the university almost identically replicate the performance criteria applied to them from above.
This begins with the executive center of the university, which usually applies the same or nearly the same performance criteria across the university’s organizational divisions that the Australian government uses for the performance-based allocation of research funds across the entire university sector.
This intra-institutional adaptation of governmental performance measures is then repeated at the level of individual faculties (see Gläser, Lange, Laudel & Schimank 2010). In the majority of cases, faculties internally replicate the performance criteria by which they are themselves funded, using them in turn to determine the allocation of funds to schools and departments.
Finally, Australian universities have established comprehensive performance-based governance systems to closely monitor the work of individual academics against performance criteria which align closely with those used at the top of the governance hierarchy.
In the majority of institutions, these evaluations of individual performance continue to be conducted at the departmental or faculty level. That said, the central administration usually sets the broader criteria used in individual performance evaluations and, in some notable instances, has taken direct control of the associated evaluation mechanisms.
One example of this is a research performance ‘index’ recently established at one of the country’s leading research universities. This index comprises, in the main, the same three research metrics used in the governmental research funding formula: the number of publications in recognized outlets, external research grant income, and the number of research students supervised and brought to completion.
This individual performance index is apparently the result of a concerted effort to directly model individual performance evaluations upon system-level research funding mechanisms. Yet in mechanically applying the indicators used by government across the university, this index also violates basic ‘best-practice’ principles of research evaluation recently outlined in the ‘Leiden Manifesto’ (see Hicks, Wouters, Waltman, de Rijcke & Rafols 2015). For example, the index makes no provision for field-specific normalization of some of the quantitative indicators employed (e.g., research grant income). Moreover, its formal mechanisms leave little scope for qualitative judgment in evaluating performance.
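To illustrate what field-specific normalization would add, here is a minimal sketch with invented researchers and invented field baselines: dividing a raw indicator such as grant income by a field average compares each academic with disciplinary peers, rather than penalizing those in low-cost fields against colleagues in expensive ones.

```python
# Hypothetical illustration of field normalization, one of the
# evaluation principles the index described above omits.
# All researchers and field baselines are invented for the example.

field_mean_income = {"history": 20_000, "biomedicine": 400_000}

researchers = [
    {"name": "historian", "field": "history", "income": 30_000},
    {"name": "biomedical scientist", "field": "biomedicine", "income": 300_000},
]

for r in researchers:
    # Raw income favours high-cost fields; dividing by the field mean
    # compares each researcher with peers in their own discipline.
    r["normalized"] = r["income"] / field_mean_income[r["field"]]

# historian: 1.5 (above field average); biomedical scientist: 0.75 (below)
```

On raw income the biomedical scientist appears ten times as productive; after normalization the historian is the one outperforming their field, which is why an unnormalized index can distort comparisons across disciplines.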
What is striking about the Australian case is not so much the mere appearance of these mimicking behaviors at all organizational levels, but the extent to which this mimicry has occurred. Even at the country’s leading universities, the research performance indicators used internally, down to the level of individual performance evaluations, largely conform to the ‘one-size-fits-all’ indicators employed at the system level. This is despite the clearly apparent distorting effects of those indicators.
In view of this ‘coercive’ isomorphism (see DiMaggio & Powell 1983) occurring across the various levels of university governance and organization in Australia, one may wonder about the broader effects of performance-based funding systems on universities in general.
One broader effect is apparent, although more research is required to determine its precise manifestations. Performance-based funding systems lead, for better or worse, to a decrease in diversity of institutional governance mechanisms, and therefore to a decrease in organizational diversity. It may thus be no coincidence that, in the Australian case, organizational diversity across the university sector declined markedly and rapidly following the establishment of a rather ‘heavy-handed’ performance-based funding system at the national level, as recent empirical research has shown (see Croucher & Woelert 2015).
Ultimately, the example of the Australian system suggests that, beyond a certain point, an emphasis on performance-based funding mechanisms in the governance of universities may, above all else, promote and reward conformance within the university sector. The problem is that such conformance may in turn produce unintended, detrimental effects, discouraging riskier and longer-term approaches to both research and organizational strategy.
Croucher, G., & Woelert, P. (2015). Institutional isomorphism and the creation of the unified national system of higher education in Australia: An empirical analysis. Higher Education. doi:10.1007/s10734-015-9914-6.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277–304.
Gläser, J., Lange, S., Laudel, G., & Schimank, U. (2010). The limits of universality: How field-specific epistemic conditions affect authority relations and their consequences. In R. Whitley, J. Gläser, & L. Engwall (Eds.), Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation (pp. 219–324). Oxford: Oxford University Press.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431.