A light bulb in front of an eye, illustrating research integrity

From fraudsters to fudgers: research integrity is on trial

Bad behaviour is omnipresent in science. It encompasses everything from outright scientific fraud, such as falsifying data, to other misconduct like cherry-picking data, selecting only favourable-looking images and graphs, and drawing conclusions that are not backed up by the actual facts. All of this is more serious than merely keeping a sloppy lab notebook that no one else can follow. It raises a deeper question: what drives scientists to behave this way? Researchers are typically considered quite clever people, so why would they cheat when, in theory, one of the backbones of science – the replicability of results – is bound to catch them in the end? And that is just one of the watchmen: university colleagues and journal editors must be duped too.

Some estimate that 1-2% of papers contain fabricated data, and the output of websites such as Retraction Watch clearly shows that the number of retractions is increasing. “Research is mostly an activity of more than one person, so we should look at how far an individual can take autonomous decisions,” says bioethicist Kris Dierickx of the University of Leuven, Belgium, who has found that guidance on research integrity differs across Europe. “A study of the contexts is crucial: is misconduct a consequence of the system, or is it generating specific characteristics of personalities?”

Recently, research findings that turned out to be too good to be true made headlines in the scientific press. At the beginning of the year, Nature published a research paper and a letter that appeared to revolutionise stem cell biology: blood cells could revert to a type of stem cell simply by being stressed in a mild acid solution. The findings made headlines around the world and turned young researcher Haruko Obokata, of Japan’s RIKEN Centre for Developmental Biology in Kobe, into an overnight star. She was backed by co-authors including Charles Vacanti, a tissue engineering expert from Harvard University in Cambridge, Massachusetts, USA.

A couple of months later, it appears the papers will be retracted amid claims of misconduct and data falsification. Other groups have failed to reproduce the results, and discrepancies, such as the reuse of an image from Obokata’s doctoral thesis, have been found. A RIKEN investigation has charged her with two counts of misconduct.

In every field a faker

The use of duplicate images in the Obokata affair is reminiscent of the 2005 Hwang debacle, when major ‘advances’ in the stem cell field were eventually retracted from the journal Science. But developmental biology, where significant riches lie in successful regenerative medicine, is not the only field where fraud and misconduct occur.

In fact, no sector seems immune.

2014 has also seen the retraction of more than 120 papers published by Springer and the IEEE, many of them meaningless, computer-generated conference papers in computer science and engineering. Then there was physicist Jan Hendrik Schön, a high-flying researcher at Bell Laboratories, who used the same graph umpteen times to show anything from nanoscale transistors to superconductivity in unexpected materials. And few need reminding of the fraud by Andrew Wakefield in the medical arena, which linked the MMR vaccine to autism.

The psychological and social sciences are not immune either. The fall of the respected Diederik Stapel, dubbed the ‘lying Dutchman’, shocked the community and affected 30 peer-reviewed papers, derailing the careers of many of his PhD students.

The Stapel case highlights the fact that fraud would be much easier to detect if journals required scientists to submit their complete data sets and materials. This is the opinion of social psychologist Wolfgang Stroebe of Utrecht University in the Netherlands, who has analysed the Stapel affair. “This material should be stored and made available to other researchers who request it,” he says. “You have to remember that Stapel usually did not even make up a data set.” Doing so, he adds, is much more complicated than merely faking a few means and statistical test values. He also points out that many Dutch universities have already taken this step, and he believes the major publishers in the area, such as the American Psychological Association, should do the same.

Publish or perish pressure

It has been reported that Stapel felt pressured by the academic environment and its emphasis on publishing papers. After all, a study in a top journal can lead to more grants and more papers, which is how assistant researchers move up the chain to having their own research group.

The pressure to publish is certainly one factor, since most misconduct comes to light in the publication process, according to Nicholas Steneck, a historian of research policy and integrity who has been studying integrity in research for over 25 years. “There has always been pressure to publish in the best journals,” says Steneck, who is director of the Research Ethics and Integrity Program at the University of Michigan, in Ann Arbor, USA. “Therefore, the current publication system needs to be changed so that it places more attention on quality and integrity and less pressure on prestige and number of publications.”

Too few research leaders take integrity seriously, he believes, and there is little meaningful support for training in the responsible conduct of research or for obvious steps to reduce fraud, such as data audits. “One case of misconduct can cost a university between half a million and one million dollars [€0.7 million],” says Steneck. “Very few universities spend more than one-tenth of this amount on training.”

Others concur that the present ‘publish or perish’ culture has contributed to the prevalence of misconduct. “I think we need to de-emphasise metrics—number of publications, citation index, number of citations in high impact journals, H-factor etc.—and to search for additional ways to measure quality,” says psychologist Pieter Drenth, who authored the report A European Code of Conduct for Research Integrity for ALLEA, the federation of All European Academies, of which he is Honorary President.

Like Steneck, he cites further training, as well as greater watchfulness by supervisors, collaborators and reviewers, as ways to reduce misconduct, including making co-authors fully responsible for the whole publication. “A lot can be done,” he says, pointing to the development of a clear system for identifying and investigating allegations, punishing scientists found guilty, and protecting whistleblowers.

Personality and peer pressure

It is notable that fraudsters usually act on their own. Whole lab groups are rarely, if ever, implicated, and the whistleblowers are often junior lab members or suspicious co-authors. A scientist may be inclined to bad behaviour if they perceive less risk in the misconduct or more reward from the gain, according to decision-making psychologist Marianne Promberger of King’s College London, UK.

“It is also worth remembering that a big problem for science is not direct criminal fraud but ‘fudging’ to get a desired result,” she says, citing several examples of doubtful practices such as “p-hacking, dropping outliers until the analysis comes out ‘right’, and stopping a study once results are significant.” She thinks that, with this type of misconduct, a lot of peer observation is going on: “If everyone else in the lab does this or that ‘p-hacking’ measure, then surely it’s not so bad?”

Remedial actions

No legislation or guidelines are foolproof, and some cheaters will always get through, according to Promberger. Like Stroebe, she thinks that journals requiring publication of the data alongside the article would make misconduct harder to pull off. “This means publications can be scrutinised with tools such as Uri Simonsohn’s ‘p-curve’, which looks across studies or papers for an unusually high density of p-values just below the ‘significant’ cut-offs,” she adds.
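To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of check such a tool performs on a set of reported p-values. The function name, the 0.04–0.05 window and the example values are assumptions made for illustration; this is not Simonsohn’s actual implementation.

```python
# Illustrative sketch only: flag a collection of reported p-values whose
# significant results cluster suspiciously just below the 0.05 cut-off.
# The window width and example values are assumptions, not Simonsohn's method.

def share_just_below_cutoff(p_values, alpha=0.05, window=0.01):
    """Return the fraction of significant p-values lying in [alpha - window, alpha)."""
    significant = [p for p in p_values if p < alpha]
    if not significant:
        return 0.0
    just_below = [p for p in significant if alpha - window <= p]
    return len(just_below) / len(significant)

# Hypothetical set of reported p-values: genuine effects tend to produce many
# p-values well below 0.05, whereas 'fudged' ones crowd the 0.04-0.05 band.
reported = [0.048, 0.044, 0.049, 0.047, 0.031, 0.046]
print(f"Share of significant p-values just below 0.05: {share_just_below_cutoff(reported):.2f}")
```

A real p-curve analysis examines the full distribution of significant p-values rather than a single band, but the intuition is the same: selective reporting leaves a statistical fingerprint.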

She also cites the example of a paper that was retracted when the raw data revealed averages with a suspicious number of whole numbers, lacking the decimal points usually associated with means. “While data posting cannot find all frauds, it makes it much harder to hide fraudulent data, at very little cost to honest researchers.”
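As a companion illustration, here is a similarly hedged sketch of the whole-number check described above; the threshold and data are hypothetical and not taken from the retracted paper.

```python
# Illustrative sketch only: estimate how often reported group means are exact
# whole numbers. Means of non-integer-scale data almost always carry a
# fractional part, so a high share of exact integers can be a red flag.

def whole_number_share(means, tolerance=1e-9):
    """Fraction of reported means that are (numerically) exact whole numbers."""
    if not means:
        return 0.0
    whole = [m for m in means if abs(m - round(m)) < tolerance]
    return len(whole) / len(means)

reported_means = [3.0, 4.0, 5.0, 2.0, 4.12]   # hypothetical values
print(f"Share of whole-number means: {whole_number_share(reported_means):.2f}")
```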

It is difficult to disagree, and some journals are taking steps in this direction, although they are stopping short, for now, of making it mandatory. One solution is to develop and disseminate policies and practices that promote data sharing, sound research practices and reproducibility. This is the commitment made by the Association for Psychological Science (APS), according to its executive director, Alan Kraut. The APS publishes key journals in the psychological and social sciences, including journals that carried papers by the fraudster Stapel.

“We received some 25 laboratory applications to replicate the first [experimental dataset] offered [to third party scrutiny]; a much higher rate than expected,” he says. “That is the path we are taking to get changes to take place from the ground up versus with imperatives from above.” Kraut adds that the APS has several other initiatives planned for the near future aimed at changing the research culture.

Time for change

Although it is debated whether outright misconduct is on the rise, it is clear from the increasing number of retractions that change is required. Journals must press scientists to make their data accessible and, wherever possible, make this mandatory. At the university level, misconduct reviews and hearings need to be more transparent and open to scrutiny. Too often they are held behind closed doors without comment, as in the case of immunologist Alirio Melendez, and universities do not explain why researchers have resigned, as in the 2012 Sanna case.

At the national level – and beyond, in the case of Europe – harmonisation of policies and guidelines cannot hurt. More needs to be done to train and mentor young scientists to protect them from the effects of misconduct. More must also be done to encourage them to speak up, and to protect whistleblowers.

As when athletes are caught doping, there is something both satisfying and disappointing about catching misconduct in science. It shows the controls are working, to some extent. But it also reminds us that unscrupulous behaviour is prevalent in all scientific fields, and some cheats will always beat the system, at least for a while.

However, with traditional peer review under threat from the open access movement and from journals where you can simply pay to get your work published, there is a clear and urgent need for focused and immediate action, from the top down and the bottom up, to track down research misconduct more systematically.

Featured image credit: CC BY 2.0 by Marko Milošević

Arran Frood