New incentives in academic publishing are designed to support replication
The marriage of mind to brain has existed for centuries. Yet the relationship between these faithful partners in cognition still confounds scientists and philosophers. Researchers studying the mind and brain today say they can overcome the complexity of their data if barriers to replication are removed. The journey begins, they argue, with changes in the way they report their results – a solution that may help quell the crisis of confidence over replicability that pervades the life and social sciences.
Diverse tools for complex data
Neuroscientists and psychologists admit the intricacy of the human mind and brain contributes to the replication problems present in their fields. For example, a single functional magnetic resonance imaging (fMRI) scan comprises some 80,000 three-dimensional units, known as voxels. Neuroscientists use these scans to measure how brain activity “changes in different people, over time, as they respond to different stimuli,” says Eric-Jan Wagenmakers, a professor of psychological methodology at the University of Amsterdam in the Netherlands. And many argue fMRI scans and other available imaging techniques still do not capture brain function with enough accuracy and detail.
To understand their complex data, neuroscientists and psychologists require an equally elaborate set of analysis tools. These techniques allow scientists to differentiate the many interacting variables present in brain scans and psychological studies. They can nevertheless lead to a spectrum of results – adding to the complexity that already defines their subject matter. “If you give 30 research teams the same data set, they may reach 30 different conclusions,” says Wagenmakers. And all their analysis methods could be equally justified.
However, “if you can replicate a certain effect with multiple types of analyses, then it enhances the confidence in the finding,” reasons Wouter Boekel, a PhD student at the University of Amsterdam, who co-authored a replication paper with Wagenmakers, published in Cortex in 2015. Unfortunately, researchers rarely use their diverse statistical techniques for replication, adds Wagenmakers.
In addition to expanding the array of tools available to cognitive researchers, the advent of computation has also increased the efficiency with which researchers can do their analyses. Compared to the pencil-and-paper methods of yesteryear, modern techniques process large amounts of data at lightning speed. But the ease with which researchers can analyse data today also increases the likelihood that they will run multiple analyses until they find a ‘positive’ signal and tweak their initial hypotheses accordingly, says Katherine Button, a postdoctoral fellow in psychology at the University of Bristol, UK. Cherry-picking data and analyses, and changing hypotheses after the fact, fall under what scientists call ‘questionable research practices.’
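The inflation Button describes can be made concrete with a back-of-the-envelope calculation. Assuming each analysis variant behaves like an independent test at the conventional 0.05 significance threshold – real re-analyses of one dataset are correlated, so this is only an upper-bound sketch – the chance of at least one spurious ‘positive’ grows quickly with the number of attempts:

```python
def false_positive_rate(alpha: float, k: int) -> float:
    """Probability of at least one false positive across k
    independent tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

# One pre-planned analysis: the nominal 5% error rate.
print(round(false_positive_rate(0.05, 1), 3))   # → 0.05
# Twenty analysis variants tried on the same data:
print(round(false_positive_rate(0.05, 20), 2))  # → 0.64
```

Under these illustrative assumptions, trying twenty variants pushes the chance of a spurious ‘finding’ from 5% to roughly 64% – which is why undisclosed analytic flexibility undermines reported results.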
Publish or perish pressure
But why are scientists engaging in such practices in the first place?
Like many tragic tales in science, the story of cognitive research’s replication woes begins with the phrase, ‘publish or perish.’ “It’s good to push people, but the publication industry has reached a frenzy and this has real drawbacks,” says Wagenmakers. These drawbacks include classic disincentives for conducting robust and replicative research, adds Chris Chambers, a professor of cognitive neuroscience at Cardiff University in the UK.
In the rush to publish, researchers conducting studies in psychology and neuroscience favour small sample sizes because they expedite the experimental process, says Button. However, much cognitive research today centres on small and subtle effects. When small samples meet small effects, a study that reports a positive finding has most likely found it by luck. A researcher seeking to replicate such a study has two options: use the original sample size or substantially increase it.
The first choice nearly guarantees a failed replication, says Button. To use a metaphor, it would be like attempting to find a needle in a haystack – twice. Increasing sample sizes is often not an option, as funds for this kind of research are scarce, adds Chambers. In sum, besides instigating behaviours that lead to false positives – like tweaking hypotheses and cherry-picking analyses – the ‘publish or perish’ culture incentivises studies with small sample sizes. This, in turn, aggravates issues with the replication of true positives.
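Button's point can be sketched with a standard power calculation. Using the usual normal approximation for a two-sided, two-sample comparison, and illustrative numbers not drawn from any study cited here (a subtle effect of Cohen's d = 0.2 and 20 participants per group), the chance that a same-sized replication detects a real effect is small:

```python
import math

def power_two_sample(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided, two-sample test at alpha = 0.05
    for a true standardised effect size d (normal approximation)."""
    z_crit = 1.96  # two-sided 5% critical value
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    noncentrality = d * math.sqrt(n_per_group / 2)
    return phi(noncentrality - z_crit)

# A subtle effect with a small sample: power ≈ 9%, so a replication
# at the original n will usually 'fail' even when the effect is real.
print(round(power_two_sample(0.2, 20), 2))   # → 0.09
# The same effect with ten times the sample: power ≈ 52%.
print(round(power_two_sample(0.2, 200), 2))  # → 0.52
```

Under these assumptions, even a genuine effect would be detected less than one time in ten at the original sample size – the needle-in-a-haystack problem in numerical form.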
The irony of ‘conceptual replication’
In psychology, the phenomenon of ‘conceptual replication’ epitomizes the relationship between replication and the publishing culture in mind and brain research today. ‘Direct replications’ attempt to replicate the original hypotheses of a study as precisely as possible. In contrast, ‘conceptual replications’ transform the original experiment into novel research by testing slightly different variables.
However, “when researchers fail to show an effect, their explanation will be: it’s obviously because we changed the experimental methodology, so you can never falsify any of the results,” says Chambers. He claims the same phenomenon occurs in neuroscience, though psychologists openly defend these practices. Relying on conceptual replication is “deadly” to the reliability of his discipline, he argues, as many consider falsification the quality that distinguishes science from pseudoscientific endeavours, like astrology.
Chambers, also a section editor at the journal Cortex, argues researchers confusingly combine the merits of exploratory research (novelty and creativity) with those of hypothesis-driven research (confirmation and reliability). There are, he believes, at least two reasons for this: to maximise any possibility for publication and to “shoehorn” their research into a “one-size-fits-all” publishing model. But exploratory science masquerading as hypothesis-driven research, and vice versa, ends up stunting the reliability of mind and brain research, he says, as no work is ever directly replicated.
Change in the wind
Chambers and colleagues at Cortex are taking steps to change the incentive structure and increase the number of replication studies published in their discipline. Since 2013, Cortex has offered a new format called Registered Reports, in which researchers pre-register their detailed methodology – including all hypotheses and statistical analyses – before the study is conducted. If the journal's editors deem the methodology robust enough, the researchers are guaranteed publication regardless of their results. In 2013 the journal Perspectives on Psychological Science also kicked off a similar format called Registered Reports and Replications. Besides directly incentivising replication studies, this publishing model “also eliminates the incentive for academics to engage in questionable research practices,” Chambers argues.
Cortex and Perspectives on Psychological Science were the first journals in science as a whole to integrate pre-registration into their publishing model. Since 2013, journals in fields ranging from cancer biology to political science have established similar publishing formats–suggesting pre-registration may apply to the life and social sciences as a whole.
But critics argue pre-registration curbs creativity and puts “science in chains.” Chambers and his colleagues at Cortex admittedly aim to establish more “stringent criteria” in their discipline with pre-registered, hypothesis-driven publishing formats. Nonetheless, they argue exploratory research is of equal importance to the progression and reliability of mind and brain research. In fact, the journal plans to establish another publishing format, Exploratory Reports, in the future.
When asked if pre-registration would limit exploration in neuroscience and psychology, Chambers says his colleague Daniël Lakens, an assistant professor in applied cognitive psychology at the Eindhoven University of Technology in the Netherlands, best summed up the balance needed between structure and creativity in their field: “Science is like a sonnet. There is a structure within which scientists work, but that does not have to limit our creativity.”
Do you think this new way of doing research will prove effective? Your thoughts and opinions are valuable; feel free to use the comment section below.
Featured image credit: CC BY-NC 2.0 by TZA
Go back to the Special Issue: Reproducibility and replicability