Reproducibility studies have yet to be factored in as positive contributions to research evaluation
Researchers are increasingly being judged on two interlinked criteria: the number of prestigious papers they publish, and their success in pulling in grants and funding. The problem is that publishing and funding both prize one thing above all else: novelty. This obsession with shiny new science undermines the scientific tradition of self-correction. Reproducing and verifying other people's work is a crucial but thankless task that earns researchers little high-impact kudos, and is therefore unlikely to translate into research grants. Researchers often cannot afford to spend precious time and money reproducing others' work for fear of seeing their own labs sink.
Reproducibility is now enjoying a renaissance. The US Reproducibility Initiative is a case in point. Backed by Science Exchange, Figshare, PLOS and Mendeley, it aims to identify and reward high-quality reproducible research through independent validation of key experimental results in the life sciences. But does buzzword status actually translate into real-life change? It depends on the angle from which we look at the issue. The biggest progress has, in fact, been in promoting reproducible research.
Publishers' support
Some of the largest academic publishers have been heavily involved in the debate. For example, in 2013, Nature took steps to improve the reporting of science. “We introduced a checklist of reporting requirements,” says Veronique Kiermer, director of author and reviewer services at the Nature Publishing Group. “We eliminated length limits on methods sections, increased the scrutiny of statistics by appointing a statistical advisor and re-emphasised our commitment to data-sharing.”
As for publishing reproducibility studies, refutations that meet peer-review standards can be published in the same journal, “bi-directionally linked to the paper on the Nature website and at the major indexers,” according to Kiermer. Confirmation studies that add no new information are best suited to “minimal threshold journals, such as Scientific Reports,” she adds. To put this in context, the 2014 impact factor of Scientific Reports was approximately eight times lower than that of Nature: not much of a reward for the scientist who confirmed the original study.
By comparison, other publishers place greater emphasis on quality control during the publishing process to ensure reproducibility. “If you look at our journals, we do intensive data checks and manipulation checks,” says EMBO director Maria Leptin.
Meanwhile, others believe that making research data more widely available could encourage more reproducibility. EMBO asks researchers to consider openly sharing their source data. Nature is following a similar route, according to Kiermer. “Recently, we have stressed the importance of depositing data in public repositories, launching a sister publication Scientific Data, where authors can publish Data Descriptors to enhance reusability of datasets,” she says.
Could funders use their financial clout to force researchers to undertake a certain amount of reproducibility work? “[Reproducibility studies are] not everybody’s expertise,” Leptin says, warning that “it requires a lot of thought before you just turn around and say we’ll require you to do this now.”
Awarding credit to researchers
The key to promoting reproducibility is to redefine it from “a nuisance to a symbol of quality,” says Peter Kraker, a postdoctoral computer scientist at Graz University of Technology, Austria, who specialises in the visualisation of scholarly communication and is a 2013 recipient of a Panton Fellowship, awarded to researchers actively promoting open data. He says that the first step is a “grassroots movement, [which] shows it is possible.” But then, he adds, “you need some top-down rules or mandates.”
But if funders do not mandate reproducibility studies, and if high-impact publishers only publish high-impact refutations while relegating confirmation studies to lower-impact outlets, where is the incentive?
One proposed solution is the concept of a Reproducibility Index (RI) for journals, put forward in 2013 by Ivan Oransky, editor of the highly regarded blogs Retraction Watch and Embargo Watch and vice president of MedPage Today. Under this scheme, journals would be rewarded with a high RI both for publishing reproducible research and for checking whether studies were confirmed or refuted post-publication. The idea of an RI has not taken off so far, but it did drag reproducibility that bit further into the spotlight.
Technology could be reproducibility’s saving grace in this debate. The challenge is how to give researchers credit for participating in online post-publication peer review communities such as PubPeer, F1000, ScienceOpen and ResearchGate, to name only a few. Such post-publication mechanisms made it possible for Kenneth Ka-Ho Lee from Hong Kong to post his results in ResearchGate’s Open Review section after he failed to reproduce the now infamous Obokata experiments, which claimed to turn adult cells into stem cells by bathing them in acid. He was the first to come out with evidence that the method did not work.
Are there weak points to scientific replicability?
Featured image credit: CC BY-NC 2.0 by ep_jhu