How well do academics react to being measured?

Academics love to measure things. But how well do they react to being measured themselves? In the UK, that question has been thrown into sharp focus by the Research Excellence Framework, known as the REF. It is a massive exercise in which every university in the land has been invited to prove the quality of the research it undertakes. The institutions submitted their data at the end of November 2013. A year later, they will hear how they rate in every subject from music to agriculture. This rating will in turn influence the allocation of hundreds of millions of pounds sterling of research cash.

The REF is partly about classic measures of research quality, such as highly cited papers in scholarly journals. In addition, it looks at the ‘research environment,’ how well a university develops and supports research, and at ‘impact,’ the effects of research in the outside world. Impact covers anything from an engineering department developing a new alloy for aircraft engines to a professor of media studies speaking out on the future of the newspaper industry.

Jack McDonald, lecturer in war studies at King’s College London, is one of many academics who accept that the era when he and his colleagues could take public money without explaining what they did with it has passed. Indeed, he would prefer a deeper rethinking of the role of academics in society. The existing REF, he says, encourages them to do a bit of public engagement to tick the impact box, and then retreat into their library or lab.

However, the real issue is what the REF tells us about the distribution of power in higher education. Roger Burrows, pro-warden of Goldsmiths, University of London, UK, says: “It is clear that the REF was co-constructed by the Treasury [the UK finance ministry] and senior university managers. The results it will produce are predictable, but its real importance is as a managerial tool.” Managers will be able to use student demand to allocate teaching resources, and the REF to distribute research money, giving them far more power than before over the shape of individual universities.

My own belief is that despite diktats from university managers and education ministers, scholars tend to find a way to do exactly what they want to do. However, this will soon become much trickier for anyone in a British university whose department does not do well in the REF. They will find it near-impossible to get their research funded. In addition, some major UK funders will pay for PhD students only at a limited range of universities. Taken together, these measures act to restrict research to an elite group of institutions.

Burrows adds that REF-like exercises have already been tried in Hong Kong, New Zealand, Australia and other English-speaking nations. In parallel, France and Germany have run so-called Excellence Initiatives to concentrate research funding in fewer universities. And something similar has happened in China, Japan, Taiwan and other Asian nations.

These exercises all have two things in common. They expand government control of higher education, and they grow the powers of university managers while weakening those of academics.

They also violate the basic understanding between academics and universities: poor pay and long hours, balanced by a high level of autonomy and the chance to explore new knowledge.

But in the end, these massive exercises in data analysis tend to confirm what people know anyway: not all universities are the same. An advanced economy requires many types of skilled individual, and many types of education to produce them, so this is as it should be. However, no nation has yet managed the tricky feat of ensuring that research-based universities don’t grab all the prestige, most of the top students, and most of the money. That would be a far bigger challenge, with far bigger potential rewards.

Featured image credit: CC BY-SA 2.0 by AJ Cann

Martin Ince

Chair of the Academic Advisory Board at QS World University Rankings
A science and education writer, Martin Ince was REF impact consultant to Goldsmiths, University of London, UK.

One thought on “How well do academics react to being measured?”

  1. I too have helped a couple of universities in their latest REF submissions. (I did similar work on the previous RAE.) My contribution this time around was on the “impact” component of the submissions.

    There was no separate impact element last time around, but in REF2014 it counts for 20 per cent of the “marks”. After looking at dozens of submissions, covering many areas of research, my main lesson is that researchers are very bad at making the case for their impact. Some managed to leave out important impact that I knew they had achieved.

    As in other areas, the impact submissions confirm Martin’s view that “not all universities are the same”. Indeed, not all academics are the same. Nor are all subjects. So it is not possible to assess their impact using a single set of criteria. It will be interesting to see how the REF panels manage to compare them.