Death in academia and the mis-measurement of science

Universities are increasingly run like businesses hungry for performance benchmarks, disconnected from the way scientists themselves would like their research to be evaluated.

The recent suicide of Stefan Grimm, a professor of toxicology in the Faculty of Medicine at Imperial College London, UK, has received high-profile coverage in the science community and reignited the debate around the nature of research work. Universities are now in the money-making education business rather than merely the business of education, and vice-chancellors’ pay packets rival those of top private-sector CEOs. So how should these universities judge their researchers? In turn, what pressures are universities themselves subjected to that lead them to assess their staff in certain ways? And what can be done to prevent dedicated and successful scientists from feeling so pressured and downtrodden that they take their own lives? EuroScientist looks for some answers to these crucial questions about how scientists are treated by their own institutions.

Grimm’s story

“If anyone is interested how Professors are treated at Imperial College: Here is my story.”

So began a valedictory email from Grimm to colleagues, written shortly before his death in an apparent suicide. The email goes on to lambast the culture at Imperial, which he described as “publish and perish”, a dark and ironic twist on the “publish or perish” phrase used by stressed academics trying to keep up in the publications rat race.

Grimm went on to complain that “Grant income is all that counts here, not scientific output”, and detailed how the college had begun informal proceedings to remove him from his post. He had been told by email that he was “struggling to fulfil the metrics” of a professorial post at the institution, “which include maintaining established funding in a programme of research with an attributable share of research spend of £200k [€270,000 per annum]”. Grimm had had at least four papers published and had £130,000 (€170,000) of funding in the year before his death.

The story and email thread went viral after being published by David Colquhoun, emeritus professor of pharmacology at University College London, UK. According to a follow-up article, it has been viewed by more than 160,000 people from over 200 countries, clearly hitting a nerve with people who work in or near the modern research landscape. “I think it is very much driven by the perverse incentives of the vice-chancellors who want to go to the top of the ranking tables,” says Colquhoun, who broke the story on his award-winning blog.

It is difficult to assess the rate of deaths related to the pressure academics are subjected to across Europe. Such incidents remain isolated, and the reasons behind them vary considerably. For example, Marie-Claude Lorne, a French philosophy academic at the University of Brest, took her own life in 2008 after failing to be appointed to a permanent position. More recently, in 2012, Jürgen Koebke, the former director of the Anatomy Institute at the University of Cologne, Germany, did the same following an investigation into irregularities in the storage of corpses donated to medicine, which had not been buried in a timely manner. Another case emerged in the UK at the end of 2014: Nigel Veitch, a senior phytochemist at Kew Gardens, a plant research and conservation centre in London, UK, was found dead. According to The Times, the scientist had been told he would have to reapply for his own job following a financial shortfall and subsequent restructuring at Kew.

However, because it has been widely reported, the recent UK tragedy of Stefan Grimm has focused attention on academia’s current problems. These pressures are not unique to the UK as educational reforms gather pace elsewhere; France, for example, plans a new mega-university to counter the country’s absence from the global top 20. Following reforms in previous years, French universities were required to become financially and operationally autonomous from the State. The move has left many universities struggling, and it has resulted in undue pressure on researchers.

Top of the table

Colquhoun thinks that university vice-chancellors should stop paying attention to ranking systems. “There are many of them and they all give different answers,” he says. “Even if you use the same inputs you can get any answer you want. It’s stupid beyond belief.”

By way of example, he cites the recent Research Excellence Framework (REF) exercise in the UK, which assessed research in the UK’s higher education institutions over the past five years. He concedes it had sensible aspects, such as limiting the number of papers research groups could submit to four, to prevent ‘gaming’ of the system by flooding it with publications.

Nonetheless, Colquhoun says that by adjusting the weightings of the various input measures you can arrive at any answer. “10% to impact, 10% to student numbers, 20% to publications, so the weights attached are quite arbitrary,” he says.
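
To make that point concrete, here is a minimal sketch in Python, using entirely made-up scores and weightings rather than real REF data, of how the choice of weights alone can determine which institution comes out on top:

```python
# Hypothetical institutions with illustrative scores (0-100) on three
# of the kinds of measures Colquhoun mentions. None of this is real data.
scores = {
    "University A": (90, 40, 60),  # (impact, student numbers, publications)
    "University B": (50, 80, 70),
}

def rank(weights):
    """Order institutions by their weighted composite score."""
    return sorted(
        scores,
        key=lambda inst: sum(w * s for w, s in zip(weights, scores[inst])),
        reverse=True,
    )

# Same inputs, two arbitrary weightings, two different 'winners'.
print(rank((0.6, 0.2, 0.2)))  # favours impact          -> University A first
print(rank((0.1, 0.6, 0.3)))  # favours student numbers -> University B first
```

With identical inputs, simply shifting weight from impact to student numbers reverses the ranking, which is precisely the arbitrariness Colquhoun objects to.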

Universities run like businesses

The trouble is that universities are copying businesses by setting targets for things like grant value and publications in certain high-ranking journals, according to Rob Briner, a professor of organisational psychology at the University of Bath, UK. “One of the bad things about this is that they are fantasy targets. They are completely unrealistic,” comments Briner.

The last 25 years have seen numerous targets and rankings emerge as competition between universities has gone global. And this competition, Briner says, has seen the very principles of university research fundamentally undermined. He believes that such institutions need to ask themselves what they are actually trying to achieve.


A 2014 report by the Brussels-based European University Association, entitled Rankings in Institutional Strategies and Processes, concludes: “[Higher Education Institutions (HEIs)] should understand the limitations of rankings and not let themselves become diverted or mesmerised by them. HEIs should not use rankings to inform resource allocation decisions, nor should they manipulate their reporting of statistical performance data in order to rise in the rankings.” To avoid such issues, the study suggests a solution: “Ultimately, to overcome problems associated with inappropriate indicators used by rankings, should there be an international common dataset on higher education which would facilitate greater and more meaningful comparability? As challenging as it may be to find consensus on such a dataset, it might be worth exploring the possibility.”

Inadequate short-termism

But what about the general interest in research, the teaching and education, and challenging and changing the way people think – universities still want that, don’t they? “I don’t think that’s what universities really want, even if they say it’s what they really want,” says Briner. “They want people who will just produce results, however they choose to define results this year, or next year, or over the next decade.”

Briner thinks that if you want to do blue-sky research then maybe universities aren’t the place anymore. “Maybe industry is now where they take a much longer term view. You can keep trying and maybe not get results, but in universities that wouldn’t be acceptable now because of the short-termism.”

The problem for everyone involved—vice-chancellors included—is that people take note of university rankings: from foreign students paying expensive tuition fees, to higher education funding bodies that use assessments like REF to allocate core grants for institutions. Money follows money, and this leads to enormous pressure on researchers.

Collateral damage for scientists

American universities are prime examples of academic institutions run like businesses. As a result, academics are subjected to intense pressures. This is revealed in an article entitled Top 10 reasons being a university professor is a stressful job, by David Kroll, a pharmacologist and science writer formerly based at North Carolina State University, in Raleigh, USA. “You can never have enough grants at most biomedical research universities,” he writes, adding that staff can “actually be threatened by their supervisors with being replaced… if they can’t score grant funding.”

Although the UK context is different, these are precisely the pressures that clearly weighed heavily on Stefan Grimm, who must have experienced first-hand exactly what Kroll describes: “a professor with tenure who is deemed unproductive by whatever anonymous review can certainly be made to wish they didn’t have a job.”

Pushed to the limits

This pressure has been quantified in recent studies. For example, a survey of Dutch academics published in 2013 revealed a high level of burnout among medical professors in the Netherlands: at 24%, it was well above the national average of 8% to 11%. The authors also noted that publication pressure correlates positively with burnout symptoms, particularly in the domain of emotional exhaustion.

What further frustrates researchers is that some metrics are out in the open, like REF scores and the H-index, but others are not. Did Grimm ever know that his £130,000 of grant money should have been £200,000, for example?
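
The H-index, at least, is a fully transparent metric: a researcher has index h if h of their papers have been cited at least h times each. Here is a minimal sketch of the calculation, with invented citation counts purely for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper ranked h-th still has >= h citations
        else:
            break
    return h

# Invented citation counts for six papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: the 3rd-ranked paper has >= 3
                                     #    citations, but the 4th has < 4
```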

The incentives of university targets can also be questioned, as they are heavily oriented towards quantitative measures such as grant amounts and numbers of publications and citations, rather than qualitative ones. That is according to study author Joeri Tijdink, a psychiatrist and researcher at the VU University Medical Center, Amsterdam, Netherlands. In the Netherlands, universities judge their researchers on a range of metric scores. “You don’t easily get fired for not meeting the criteria, although facilitation of your research, or your position as professor, may be at stake if you repeatedly underperform,” he says.

Evaluation metric slaves

Tijdink says he knows several talented young researchers who have decided to quit their academic positions. “These kinds of metrics should not be as dominant as they are now, but should be complemented with ‘soft’, qualitative targets such as the ability to inspire younger scientists, educational skills, and the rewarding of ‘slow’ but excellent science,” notes Tijdink.

Metrics used to judge researchers should be as open and transparent as possible, according to Bastiaan Verweij, Spokesperson for Public Affairs at VSNU, the Association of Universities in the Netherlands. “Universities have a desire that researchers are able to raise their own grants, but not in a way that you’ll be judged only on the basis of grants and publications,” says Verweij.

He explains that all research at Dutch universities is assessed every six years using the Standard Evaluation Protocol, an exercise performed in some shape or form in most European countries. And he stresses that the number of evaluation criteria has been reduced from four to three: scientific quality, societal relevance and viability. “The old criterion of productivity has been dropped as an independent criterion. With this, the current criticism that the pressure to publish has gone a step too far in many disciplines has been acknowledged,” says Verweij. “No longer including productivity as a separate assessment criterion also gives a clear signal that ‘more is not necessarily better’.”

What not to do

There have been reports in the UK, for example, that the REF assessment exercise fostered an increased atmosphere of competitiveness and bullying at UK universities: 81% of respondents at ‘Russell Group’ universities cited the REF as having contributed to bullying. Sadly, for all the criticism the REF has received for being ‘not fit for purpose’, it is hard to see exactly what it should be replaced with that would not be subject to much the same problems. Witnessing accusations of institutional gaming in hiring certain staff in the lead-up to the assessment, it is easy to sympathise with Colquhoun’s desire to see university rankings abolished “because they are only produced for commercial gain”.

It is unlikely that universities will revert to the old ways of being run. They are likely to continue to be run as businesses for a long time to come, even in the face of criticism for setting financial-trader-like targets for grant income. Hence, in terms of personnel assessment, it is time for universities to come clean and be transparent about any and all means by which they judge their researchers, so that researchers, at the very least, know what they are letting themselves in for. Metrics must be transparent, even if this risks people ‘gaming’ the system.

Besides, the culture of evaluating everything and everyone through metrics is in full swing in the information age. So much so that going back to old-fashioned pals, panels and recommendations is barely worth suggesting. Grimm wrote in his email that “As we all know hitting the sweet spot in bioscience is simply a matter of luck, both for grant applications and publications”. Many researchers will feel similar frustration. Surely we can do better than this?

How do you feel about universities being run like businesses?
Have you ever been put under high pressure due to your performance in your job?


Featured image credit: CC BY 2.0 by r. nial bradshaw

Arran Frood
