You would think that scientists, being quite clever people, would be able to agree on the best way to rank each other’s work. Oh no, not any longer. The issue really kicked off when recent Nobel Laureate and molecular biologist Randy Schekman, professor of cell and developmental biology at the University of California, Berkeley, USA, accused the big three journals Science, Cell and Nature of “distorting science” by promoting their brands “in ways more conducive to selling subscriptions than to stimulating the most important research.”
Many agree, including Andrew Plested, a biophysicist based at the Leibniz Institute for Molecular Pharmacology (FMP) in Berlin, Germany, who has critiqued alternatives to peer review, referred to as altmetrics. “The journals run by publishers tend to keep the review process opaque and under tight control, because they don’t necessarily have the best interests of scientists at heart.” Some counter-accused Schekman of hypocrisy, saying he used the journals to win the most coveted of scientific awards only to subsequently chastise those very same journals whilst plugging his own journal, eLife. The latter, Schekman says, offers improvements on the current system.
For this article, EuroScientist asked Science, Cell and Nature as well as eLife and independent commentators to go on the record with their thoughts on how they see the peer review system, as it stands, and what alternatives should be considered.
It used to be so simple. A scientist would send a paper to a journal. The latter would send it out to some enlightened mandarins in the field of study. These peer reviewers would then recommend to the editor whether or not to publish the paper. For around 100 years, this is how scientific papers have been published. But along came the internet. Freed of the constraints of the price of paper, the number of journals has skyrocketed. From 2000 to 2010, open access publishing grew at an average annual rate of 18% in the number of journals and 30% in the number of articles. This proliferation of papers – and of fees to access them – has led to multi-billion euro profits for major publishers based in Europe.
The most popular among them exploit their journals’ higher Impact Factor to charge premiums to advertisers. Critics such as Schekman say they use Impact Factor, a metric reinforced by their peer review system, to their own advantage. Since the advent of the instant commentary of social media, suddenly no one agrees that the current system is fair. This is particularly true of younger researchers, who cannot get a foothold in the premier-league publications. As a result, they miss out on the funding opportunities that follow.
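For readers unfamiliar with the metric under fire, the standard two-year Journal Impact Factor is just a ratio: citations received this year to a journal's articles from the previous two years, divided by the number of citable items it published in those years. A minimal sketch, using invented numbers for a hypothetical journal:

```python
def impact_factor(citations_received: int, citable_items: int) -> float:
    """Two-year Journal Impact Factor: citations in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    items (articles and reviews) published in those two years."""
    return citations_received / citable_items

# Hypothetical journal: 4,000 citations in 2014 to its 2012-2013 papers,
# of which 500 counted as citable items.
print(impact_factor(4000, 500))  # 8.0
```

Note that both the numerator and the denominator are controlled by editorial choices, which is precisely what critics mean when they say publishers can manage the metric to their own advantage.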
Schekman says eLife’s review system differs in several critical ways. “At eLife the editor and the reviewers share their comments online and are identified to each other,” says Schekman. This differs from the traditional system, in which reviewers are blind – and in theory unbiased – to each other’s opinions. And Schekman says this offers advantages in terms of speed.
One of researchers’ biggest gripes is the time it takes to get from submission to publication. It is too long, says, well, everyone who has submitted a paper and waited more than six months to see it in print, or waited three months only for it to be rejected. With eLife, authors receive an initial decision within three days on whether their paper will go further. The median time to publication is [at the time of writing] 87 days, according to Andy Collings, Managing Editor at eLife, in Cambridge, UK. The time authors spend revising is 40 days, with 1.3 rounds of revision on average.
Transparency appears to be key to gaining the respect of contributors. These statistics are published on the journal’s website and updated if they change. “This is better than at Cell, Nature and Science,” says Schekman. His journal also has a provision for open peer review, so authors can know who reviewed their paper. But he admits that in practice few opt to do this, especially if the paper is rejected.
There’s a culture change in progress and it will take time for people to adjust, according to Professor Stephen Curry, a structural biologist at Imperial College London, UK, who engages with wider debates in academia, such as the lack of usefulness of journal Impact Factors. “For sure there are risks in the openness – younger reviewers might pull their punches – but there appear to have been some successes in open peer review.” He adds that eLife’s process is more than just tinkering with the details of the present system. “There is more direct consultation between reviewers and a policy of not demanding additional experiments.”
In a similar vein, Nature conducted an experiment with open review back in 2006 [in the interest of full disclosure, I need to confess that I was employed there at the time as Web Editor]. Veronique Kiermer, executive editor and head of researcher services at the Nature Publishing Group, the German-owned company that publishes Nature, describes the test as “worthwhile.” But she says the trial was too small at the time to establish open peer review as a viable alternative to traditional peer review. “In particular, it was very difficult to obtain useful technical comments in an open peer review forum,” she says.
Obstacles to open review, such as lack of time and career incentives, are still prevalent, according to Kiermer. “If it is an indication, the ability to comment on articles post-publication at the journal website, a functionality since introduced by Nature and other journals, has not yet been enthusiastically embraced by researchers,” she notes. But she says their journals are experimenting: Nature Climate Change and Nature Geoscience have recently started to offer double-blind peer review as an author option.
And following in the steps of PLoS, the Nature journals are also publishing article-level metrics for each paper, including citations, downloads, and media and social media coverage. There are strong feelings emerging about the use of certain metrics as an alternative to peer review.
Meanwhile, the US journal Science has also conducted limited trials. Monica Bradford, Science’s Executive Editor, based in Washington, DC, says that in 2013 they ran an experiment in which reviewers received anonymous comments from all the other reviewers. They then had 48 hours to provide additional thoughts prior to the editor’s decision on the paper.
“Based on the results of that experiment, Science now uniformly shares reviewer comments among the reviewers, as the experiment demonstrated that the reviewers found this step informative, that it improved transparency and that it aided the editors in the decision-making process.” Bradford adds that the journal now also provides alternative metrics on individual papers – such as blog posts, social media posts and traditional press coverage, in addition to article downloads – that may prove useful for the evaluation of scientists. “But it is too early to determine if these metrics will be useful in this regard,” she says.
Cell, a publication of the Dutch publishing company Elsevier, does not, on the other hand, appear to have experimented as much, but it too has plans to enhance transparency. Cell Press consistently looks to enhance and innovate its service to authors and reviewers – including improvements in the peer review process – according to Cell’s Editor Emilie Marcus, based in Cambridge, Massachusetts, US. “To this end we have introduced a number of changes in recent years.
As one example, we are currently exploring opportunities for reviewers to share comments on each other’s reviews in advance of an editorial decision,” she says. “We also provide an opportunity for post-publication peer comments on all our published papers and author-facilitated discussions in share groups on [academic social network] Mendeley.” Sharing comments after publication is a move endorsed by Plested. “In addition to leveraging online tools like comments, the traditional peer review system can certainly be improved,” he says. “For example, EMBO Journal publishes the entire editorial correspondence,” he adds, “[the] Journal of Neuroscience forbids private comments to the editor and all comments go to the authors.”
Peer review can work, but it is not always the best system, according to pharmacologist and award-winning blogger David Colquhoun, based at University College London, UK, who co-authored the ‘Scientists don’t count’ altmetrics paper with Plested. “I think that peer review can work well, especially for the top rank of specialist journals. On the whole it works less well for the glamour journals [Nature et al.], because the editors have less specialist knowledge and because most papers submitted to them never get to the review stage at all.” Further down the status ladder, he says, peer review hardly works at all. “Far better to make it easy for the whole community to express their opinion in the comments section, [as is the case with] post-publication peer review,” he concludes.
Plested and Colquhoun are scathing about the trendy adoption of altmetrics: emerging index-scoring systems that use data from blogs, articles and social media in addition to download numbers and reference citations. In their article they explain how altmetrics can undermine scientists and their work, are too susceptible to gaming – also known as cheating – and can lead to the validation of bad science. Tweets can be bought, for example, far more easily and cheaply than people. Their argument is persuasive. But does rubbishing alternative metrics just bring us back to traditional peer review?
The old system has been caught out recently, with everything from hoax papers on arsenic-containing DNA to the ‘sting’ by Science reporter John Bohannon, who sent a gobbledegook paper to more than 300 peer-reviewed open access journals. Many more published it than rejected it. And many would have taken a publication fee from the non-existent author Ocorrafoo Cobange.
Although the operation was rightly criticised for not sending the paper to subscription journals as a control, it was still an extraordinary exercise that revealed how poorly some peer-reviewed journals can perform.
Where does all this leave us? Clearly, all systems have their flaws. Peer review is not the bastion that many think it is, particularly at lower-end journals and, according to some, at the higher end too. Fancy new metrics can be exploited, and leaving everything to comment fields after a paper has been published misses the point – there is supposed to be a ‘bad science’ filter before publication.
Perhaps it’s time to limit the number of papers or pages a scientist can publish. Plested says that about four years ago the German Research Foundation, the DFG, limited to five the number of papers an applicant can include to indicate their output, and thus their quality as a scientist. “This incentivises quality over volume. I would hope that other funding agencies would take similar steps.” That will only work long-term if other funding agencies adopt similar principles, which is some way off, but it will be interesting to see if the DFG continues on this path.
Or perhaps there are steps prior to peer review that could be useful and self-limiting. The arXiv database started as a global repository for pre-publication data and materials for papers, but now serves as a wider discussion and editing super-forum, where papers are submitted in a form close to what would go to a journal and adapted according to the recommendations of the community. And although arXiv is for physics, maths and computer science, an alternative for biology has arisen in bioRxiv, and others such as viXra will surely follow.
So a system whereby pre-publication vetting is followed by more open peer review, after which post-publication commentary takes hold, could go some way to ensuring quality over quantity – with a nod to, rather than an obsession with, alternative metrics. In the end, science is undertaken by people, and it all comes back to how researchers behave. Good mentoring, and taking the time to properly read a paper before adding to the volume of opinion, whether via social media or otherwise, is the only way to pass fair judgement on the worth of a piece of work. And, in a sense, that is how it was always done.
Featured image credit: CC BY-SA 2.0 by Gideon Burton