This post is designed to allow our readers to convert the full issue into a single PDF file, that can be read offline or in print.
Welcome to this Special Issue of EuroScientist on: Ethics, values and culture driving research!
This issue dives into the darkest corners of what scientific minds are capable of contriving to secure funding and advance their careers.
Gaming the system: who is responsible?, by Sabine Louët, editor, EuroScientist
Finding the ethics needle in the research haystack
From fraudsters to fudgers: research integrity is on trial, by Arran Frood, UK
Exclusive interview: The pressures making scientists go off-piste: Nicholas Steneck, Michigan Institute for Clinical and Health Research, USA. Part of ESOF 2014 special sponsored advance coverage.
The culture and values influencing science
Is the culture of research encouraging good science?, Catherine Joynson, Nuffield Council on Bioethics, UK.
See also our previous coverage on related topics:
- Scientists’ dreams: a society supporting science and respecting its autonomy, by Hans Peter Peters, Research Centre Jülich, Germany
The public perception of misdemeanours in research
Does media coverage of research misconduct affect public confidence in science?, Maria Lindholm and colleagues, Vetenskap & Allmänhet, Sweden. Part of ESOF 2014 special sponsored advance coverage.
Science communication: putting the cart before the horses, by Jens Degett, Denmark
See also our previous coverage on related topics:
- The abuse of science, by Gerry Byrne, Ireland
The brownie point: when artistic values drive research
Shrinking humans: an artist’s perspective on the sustainability challenge, by Arne Hendriks, The Netherlands
Featured image credit: CC BY 2.0 by the NASA Goddard Space Flight Center
Blaming the increase in fraud and unethical behaviour observed in science on a lack of rigour among the emerging ranks of PhDs may appear blatantly reductionist and reactionary. Indeed, the most publicised and serious cases come from much more senior scientists.
In fact, some might argue that we have been looking for and detecting misconduct more systematically than ever before. At the same time, there is a growing movement to raise awareness of scientists’ responsibilities and better equip them to face the pressures to publish more and seek extra funding.
Yet, scientists do not exist in a vacuum. They are the product of an educational and research system whose values heavily influence their choices.
Some areas of science, such as psychology and sociology among others, could shed some light on what makes some researchers stick to the prescribed rigorous scientific method—abiding by principles of rigour, honesty, openness, fairness and duty of care—while others bypass such an approach and use their imagination to sustain their fraud. Notwithstanding personal responsibility in scientific misconduct, such behaviour is also the result of a system.
Specifically, researchers are the pure product of the culture they live in. A recent initiative on the culture of scientific research, led by the Nuffield Council on Bioethics in the UK, plans to study the cultural habits of contemporary scientists to find out whether the current culture of research is encouraging good science.
Culture of course is strongly linked to the institutional settings of today’s educational and scientific endeavour. And few nowadays doubt that the strong focus on numbers of publications leads to stress and not necessarily to good publications. The system of peer review may also have overstretched itself. It would be interesting to find out the experiences and views of the EuroScientist readers. Feel free to share your thoughts, in the comment box below.
Overall, the increasing discovery of scientific misconduct has had major consequences. A primary one is the associated negative publicity, which has affected the public perception of scientists. It has also revived dissensions over which values ought to be the focus of research policy, leading some to believe that modern fundamental science is nearing its twilight. It is therefore time to take a closer look at the culture and values driving European research. One does wonder why this has not been done much sooner.
Featured image credit: kentoh via Fotolia
Sabine Louët, Editor, EuroScientist
Bad behaviour is omnipresent in science. It encompasses everything from outright scientific fraud, such as falsifying data, to other misconduct like cherry-picking data, selecting favourable-looking images and graphs, and drawing conclusions that are not backed up by the actual facts. These matters are more serious than keeping a sloppy lab notebook that no-one else can follow. This raises the deeper question: what drives scientists to behave in such a way? Researchers are typically considered quite clever people, so why would they behave in a way that, theoretically, one of the backbones of science, results’ replicability, is bound to expose in the end? And that is just one of the watchmen: university colleagues and journal editors must be duped too.
Some estimate that 1-2% of papers contain fabricated data. The output of websites such as Retraction Watch clearly shows that the number of retractions is increasing. “Research is mostly an activity of more than one person, so we should look in how far an individual can take autonomous decisions,” says bioethicist Kris Dierickx, based at the University of Leuven, Belgium, who has found that guidance on research integrity differs across Europe. “A study of the contexts is crucial: is misconduct a consequence of the system, or is it generating specific characteristics of personalities?”
Recently, research findings that turned out to be too good to be true made headlines in the scientific press. At the beginning of the year, Nature published a research paper and a letter that appeared to revolutionise stem cell biology: blood cells could revert back to a type of stem cell just by being stressed in a mild acid solution. The findings made headlines around the world, and an overnight star of young researcher Haruko Obokata from Japan’s RIKEN Centre for Developmental Biology, in Kobe. She was backed up by co-authors including Charles Vacanti, a tissue engineering expert from Harvard University, Cambridge, Massachusetts, USA.
A couple of months later, it appears the paper will be retracted amid claims of misconduct and data falsification. Other groups have failed to reproduce the results, and discrepancies, such as the reuse of an image from her doctoral thesis, have been found. A RIKEN investigation has charged Obokata with two counts of misconduct.
In every field a faker
The use of duplicate images in the Obokata affair is reminiscent of the 2005 Hwang debacle, when major ‘advances’ in the stem cell field were eventually retracted from the journal Science. But developmental biology, where significant riches lie in successful regenerative medicine, is not the only field where fraud and misconduct occur.
In fact, no sector seems immune.
2014 has also seen the retraction of more than 120 papers published by Springer and IEEE, many of them meaningless computer-generated conference papers in computer science and engineering. Then there was physicist Jan Hendrik Schön, a high-flying researcher at Bell Laboratories, who used the same graph umpteen times to show anything from nanoscale transistors to superconductivity in unexpected materials. And few need reminding of Andrew Wakefield’s fraud in the medical arena, linking the MMR vaccine to autism.
The psychological and social sciences are not immune either. The fall of respected Diederik Stapel, dubbed the ‘lying Dutchman‘, shocked the community and affected 30 peer-reviewed papers, derailing the careers of many of his PhD students.
The Stapel case highlights the fact that requiring scientists to submit complete data sets plus all materials to journals would make fraud detection much easier. This is the opinion of social psychologist Wolfgang Stroebe from Utrecht University in the Netherlands, who has analysed the Stapel affair. “This material should be stored and made available to other researchers who request it,” he says. “You have to remember that Stapel usually did not even make up a data set.” He adds that doing so is much more complicated than merely faking a few means and statistical test values. He also points out that many Dutch universities have already taken this step. And he believes the major publishers in the area, such as the American Psychological Association, should do the same.
Publish or perish pressure
It has been reported that Stapel felt pressured by the academic environment and its onus on publishing papers. After all, a study in a top journal can lead to more grants and more papers. This is how assistant researchers move up the chain to having their own research group.
The pressure to publish is certainly one factor, since most misconduct comes out in the publication process, according to Nicholas Steneck, a historian of research policy who has been studying integrity in research for over 25 years. “There has always been pressure to publish in the best journals,” says Steneck, who is director of the Research Ethics and Integrity Program at the University of Michigan, in Ann Arbor, USA. “Therefore, the current publication system needs to be changed so that it places more attention on quality and integrity and less pressure on prestige and number of publications.”
Too few research leaders take integrity seriously, he believes. And there is little meaningful support for training in the responsible conduct of research, or for obvious steps to reduce fraud such as data audits, he also notes. “One case of misconduct can cost a university between a half and one million dollars [€0.7 million],” says Steneck. “Very few universities spend more than one-tenth of this amount on training.”
Others concur that the present ‘publish or perish’ culture has contributed to the prevalence of misconduct. “I think we need to de-emphasise metrics—number of publications, citation index, number of citations in high impact journals, H-factor etc.—and to search for additional ways to measure quality,” says psychologist Pieter Drenth. He has authored the report A European Code of Conduct for Research Integrity for ALLEA, the federation of All European Academies, where he is Honorary President.
Like Steneck, he cites further training, as well as increased watchfulness by supervisors, collaborators and reviewers, as ways to reduce misconduct, including making co-authors fully responsible for the whole publication. “A lot can be done,” he says, citing the development of a clear system for identifying and investigating allegations, punishing scientists found guilty, and protecting whistle-blowers.
Personality and peer pressure
It is notable that fraudsters usually act on their own. Whole lab groups are rarely if ever implicated, and the whistle-blowers are often junior lab members or suspicious co-authors. A scientist may be inclined to bad behaviour if they see either less risk in the misconduct or more reward in the gain, according to decision-making psychologist Marianne Promberger from King’s College London, UK.
“It is also worth remembering that a big problem for science is not direct criminal fraud but ‘fudging’ to get a desired result,” she says, quoting several examples of doubtful practices such as “p-hacking, dropping outliers until the analysis comes out ‘right’, and stopping a study once results are significant.” She thinks that for this type of misconduct, a lot of peer-observation is going on. “If everyone else in the lab does this or that ‘p-hacking’ measure, then surely it’s not so bad?”
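The effect of the optional stopping Promberger describes can be seen in a minimal simulation. The sketch below is purely illustrative and not from the article: it generates data under the null hypothesis (no real effect) and compares an honest fixed-sample test with one where the researcher peeks at the p-value after every batch and stops as soon as it dips below 0.05.

```python
import random
import statistics
from math import sqrt, erf

def p_value_two_sided(sample):
    # One-sample test of mean 0, using a normal approximation to keep
    # the sketch dependency-free (a real analysis would use a t-test).
    n = len(sample)
    se = statistics.stdev(sample) / sqrt(n)
    z = abs(statistics.fmean(sample) / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def run_experiment(optional_stopping, max_n=100, batch=10):
    # Data generated under the null: any "significant" result is a false positive.
    rng = random.Random()
    sample = []
    while len(sample) < max_n:
        sample.extend(rng.gauss(0, 1) for _ in range(batch))
        if optional_stopping and p_value_two_sided(sample) < 0.05:
            return True  # stop early and report "significance"
    return p_value_two_sided(sample) < 0.05

trials = 2000
honest = sum(run_experiment(False) for _ in range(trials)) / trials
peeking = sum(run_experiment(True) for _ in range(trials)) / trials
print(f"false-positive rate, fixed n:      {honest:.3f}")   # close to the nominal 0.05
print(f"false-positive rate, with peeking: {peeking:.3f}")  # substantially inflated
```

With ten looks at the data per experiment, the peeking strategy produces several times more false positives than the nominal 5% level, which is exactly why fudged results of this kind accumulate just below the significance threshold.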
No legislation or guidelines are foolproof and some cheaters will always get through, according to Promberger. Like Stroebe, she thinks journals requiring publication of the data alongside the article would make misconduct harder to pull off. “This means publications can be scrutinised with tools such as Uri Simonsohn’s ‘p-curve’, which looks across studies or papers for an unusually high density of p-values just below the ‘significant’ cut-offs,” she adds.
She also cites the example of a paper that was retracted when the raw data revealed averages with a suspicious number of whole values, lacking the decimal fractions usually associated with means. “While data posting cannot catch all frauds, it makes it much harder to hide fraudulent data, at very little cost to honest researchers.”
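The arithmetic behind that red flag is simple to sketch. The numbers below are hypothetical, chosen only to illustrate the idea: a mean of integer-valued responses from a group of n participants must be a multiple of 1/n, so it lands on a whole number only when the group total happens to be divisible by n, which should be rare.

```python
from fractions import Fraction

def whole_number_fraction(means, tol=1e-9):
    """Share of reported means that are exactly whole numbers."""
    whole = sum(abs(m - round(m)) < tol for m in means)
    return whole / len(means)

# Hypothetical means of integer responses from groups of n=15 participants:
# each honest mean is total/15, whole only when 15 divides the total.
honest_means = [float(Fraction(total, 15)) for total in (47, 52, 61, 75, 88, 93)]
suspect_means = [3.0, 4.0, 5.0, 4.0, 3.0, 5.0]  # fabricated round values

print(whole_number_fraction(honest_means))   # 1/6 — only 75/15 is whole
print(whole_number_fraction(suspect_means))  # 1.0 — every mean is suspiciously round
```

A real forensic check would also account for rounding in the published tables, but the contrast between the two lists shows why a column of whole-number means invites scrutiny.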
It is difficult to disagree. And some journals are taking steps in this direction, although stopping short, for now, of making it mandatory. One solution is to develop and disseminate policies and practices that promote data sharing, sound research practices and reproducibility. This is the commitment made by the Association for Psychological Science (APS), according to Alan Kraut, its executive director. The APS publishes key journals in the psychological and social sciences, including those that carried the fraudster Stapel’s papers.
“We received some 25 laboratory applications to replicate the first [experimental dataset] offered [to third party scrutiny]; a much higher rate than expected,” he says. “That is the path we are taking to get changes to take place from the ground up versus with imperatives from above.” Kraut also adds that they have several other initiatives planned for the near future aimed at changing the research culture.
Time for change
Although it is debated whether outright misconduct is on the rise, the increasing number of retractions makes clear that change is required. Journals must press scientists to make their data accessible, making it mandatory wherever possible. At the university level, misconduct reviews and hearings need to be more transparent and open to scrutiny. Too often they are held behind closed doors without comment, as in the case of immunologist Alirio Melendez; and universities do not explain why researchers have resigned, as in the 2012 Sanna case.
At the national level—and beyond, in the case of Europe—harmonisation of policies and guidelines cannot hurt. More needs to be done to train and mentor young scientists to protect them from the effects of misconduct. Also more has to be done to encourage them to speak up, and to protect whistleblowers.
As when athletes are caught doping, there is something satisfying and disappointing in catching misconduct in science. It shows the controls are working to some extent. But it also reminds us that unscrupulous behaviour is prevalent in all scientific fields. Some cheats will always beat the system, at least for a while.
However, with the traditional peer-review process under threat from the open access movement and from journals where you can simply pay to get your work published, there is a clear and urgent need for focused and immediate action, from the top down and the bottom up, to track down research misconduct more systematically.
Featured image credit: CC BY 2.0 by Marko Milošević
Arran is a freelance journalist, based in London, UK.
Nicholas Steneck is director of the Research Ethics and Integrity Program of the Michigan Institute for Clinical and Health Research and professor emeritus of history at the University of Michigan, in Ann Arbor, USA. He is also a consultant to the US Government Federal Office of Research Integrity, HHS.
He has published articles on the history of research misconduct policy, responsible conduct of research instruction, the use of animals in research, classified research and academic freedom, the role of values in university research, and research on research integrity. Most recently, he authored the ORI Introduction to the Responsible Conduct of Research.
In this exclusive interview with EuroScientist, he shares his views on the pressures that lead scientists to fail to observe the strictest ethical codes of conduct.
“There is much more attention to integrity in research today than there was 15 or 20 years ago,” he says, “and therefore people are more aware of the problems that exist. Things that might have been common in the past and that nobody paid attention to, such as questionable authorship practices or bad data interpretation practices, the sort of things that went past unnoticed, are now being picked up. These things look worse because we are looking more carefully. Whether researchers are behaving with less integrity, I don’t know.”
He refers specifically to the main types of pressures that scientists are being subjected to.
“There are two pressures that I would point to. One is the pressure to publish….the more you publish, the more recognition you get, and therefore presumably, more rewards. So the enormous pressure to publish, particularly on young researchers, is one of the pressures that causes them to chop papers into small pieces, to stretch the data and so on and so forth.”
He adds: “The second one, for established researchers, is the pressure to get funding. Funding is tighter today. More and more universities require researchers to bring in funding. The more funding they bring in, the more successful they are… The pressure to publish and the pressure to get funding are what drive researchers to cut corners.”
Asked whether ethical standards have become looser, he points to a lack of rigour.
During peer review, for example, “there is not a lot of rigour in many areas in looking at the statistical analysis, the quality of the data; there is so much to look at, reviewers cannot look at the data to see whether the data supports what is going on.” He adds: “The kind of stresses that have researchers doing many different things are the things that cause them to look less rigorously at what is going on.”
One example is that scientists “rely on the abstract, rather than reading the entire paper. It is not until you read the entire paper that you actually look at the data, the interpretation and so on… Abstracts are not accurate; abstracts oversell articles. And yet we don’t have the time to go ahead and read all the details and analyse the data. That’s where the rigour is breaking down, simply because of the stresses on researchers.”
In addition, he mentions the emergence of scam research papers, which are not peer reviewed.
He also refers to the fact that the peer review process is increasingly computerised, which makes it possible for researchers to infiltrate the review process and create their own reviews. “It is just a variety of specific ways in which the internet has loosened things up a little bit and makes it harder to control good quality research.” However, “what is happening today is that more and more countries are putting in regulations.”
To avoid such issues, he recommends introducing training to “teach new researchers about their responsibilities and about the pressures they are gonna face.” He also calls upon research leaders to stand up and promote the need for research integrity.
This is a post sponsored by ESOF 2014. Nicholas Steneck will share his views as part of a discussion at the ESOF 2014 conference, to be held in Copenhagen from 21 to 26 June 2014. Specifically, he is invited to speak at a session entitled ‘Fifty shades of deceit – key tools and processes for maintaining the integrity of the scientific record.’
Featured image credit: Nicholas Steneck
More than 120 papers have been withdrawn from the subscription databases of two high-profile publishers, IEEE and Springer, because they were computer generated by SCIgen, a piece of software designed to produce random computer science research papers. The trouble is that they had no meaning at all. Yet all of them were labelled as peer reviewed and all were published in the proceedings of actual conferences. In theory, these nonsense papers had been reviewed, presented and discussed under a session chair. This leads to quite an odd situation.
The two publishers reacted in very different ways. IEEE, with more than 100 SCIgen papers published, is not directly facing up to the problem and is simply deleting the papers without notification: titles can still be found in the tables of contents of conferences, but the papers themselves are no longer available. The Springer response seems much more appropriate. It has published a retraction notice alongside each fake paper and announced measures to deter future copycats.
Incidentally, most of these offending papers were related to conferences that took place in China, and the phenomenon may stem from particularities of the way certain conferences are organised in the country. Nevertheless, all these nonsense publications found their way through the scientific publication system, even though this should never have happened.
This can be explained by several factors, which are substantially changing the way the scientific community shares its knowledge. On the one hand, technological developments have made the writing, publication and dissemination of documents quicker and easier. On the other hand, the pressure of individual evaluation of researchers—the proverbial publish or perish—is changing the publication process. This combination of factors has led to a rapid increase in scientific document production.
In a sense, one could say that global knowledge is growing ever faster than before. The presence of this junk science can be interpreted as a side effect of the ‘publish or perish’ paradigm combined with blind evaluation based on abstract numbers. Such a situation is, in itself, a clear incitement to fraud.
Text mining tools, like the SCIgen detection solution, are very efficient at identifying SCIgen papers, duplicated papers and plagiarism. But the use of such tools is a ‘quick and dirty’ response to the problem. It is as if a kind of spamming war had started at the heart of science. The phenomenon takes place at the very heart of science because knowledge diffusion is at the heart of science too. And it is a spamming war because exerting high pressure on scientists mechanically leads to overly prolific and less meaningful publications, even when they are not nonsense.
In quantum mechanics, the act of measuring a system disturbs that very system. This principle holds in physics, but also in computer science and in scientometrics and bibliometrics: by aiming to measure science, these approaches perturb scientific processes, particularly when used for management purposes.
Lecturer at Joseph Fourier University, Computing Laboratory (CNRS, LIG), University Grenoble Alpes, France.
Featured image credit: CC BY-NC-ND 2.0 by Alby Headrick
A quick look at the back catalogue of the EuroScientist provides an illustration of the wide range of issues that affect the working lives of scientists today. Previous articles have covered research evaluation, the open access movement, career structures and responsible innovation, among many others. These issues are often dealt with individually—and rightly so given their complexity. But considered as a whole, they help to make up a culture. And scientists must work within this culture to do what they set out to do: usually, to produce high quality, ethical research that is of benefit to society.
There could be several ways of interpreting what is meant by ‘high quality, ethical research’. Guidance documents on research conduct and integrity produced by most governments and science organisations across Europe, as well as by pan-European organisations, attempt to do just that.
Among these documents, there is broad agreement that scientific research should observe certain principles including honesty, rigour, openness, fairness and duty of care. So it seems that we are fairly clear about what good, ethical science should look like. However, science does not operate in a vacuum free from influence, and the products of scientific research are affected by the choices and actions of scientists themselves, and the policies of research institutions, funders and regulators.
Each of these actors has developed ways of operating in order to deal with the realities of the system. For example, research institutions and research funders have limited resources and must prioritise and reward some research and researchers over others. And scientific publishers want to build a reputation for communicating high quality, interesting science in order to attract people to their services. Therefore, they also must have systems of evaluation in place.
We may be clear about what good science is, but it is no easy task to decide which science is better than the rest.
Nevertheless, systems of assessment have developed over time by necessity, and these dictate which research and which researchers receive support and encouragement. Given these high stakes, it would not be surprising if researchers altered their behaviour to give themselves the best chance of receiving this support and encouragement.
An important question to ask is, therefore: does the system operate in a way that supports and encourages the production of what we broadly agree is ‘good science’?
This is a question being explored by a group of science organisations in the UK this year as part of a project on the culture of scientific research. These include the Nuffield Council on Bioethics, the Royal Society, the Society of Biology, the Institute of Physics and the Royal Society of Chemistry. Through an online survey and a series of events, the group will be gathering views and evidence and promoting debate about the culture of research in the UK and its effect on ethical conduct in science and the quality, value and accessibility of research.
Various parts of the system are already under close scrutiny in the country. Universities are currently competing for a share of the Government’s research budget from 2015 onwards. And the chosen metrics of assessment—namely, research outputs, research impact and research environment—have been the subject of intense debate.
Across Europe, scientists are grappling with changes to academic publishing models and to legislation on areas relevant to their work, such as data sharing and animal research.
This UK project hopes to start a conversation and provide some evidence for future policy making about how these factors work together to influence the culture of science, and whether that culture is likely to produce the desired outcome: high quality, ethical research.
The findings of the project will be published towards the end of 2014. Watch this space!
Catherine Joynson, Programme Manager, Nuffield Council on Bioethics, London, UK.
Featured image credit: enotmaks via Fotolia
History reveals a succession of many dawns and twilights in different facets of human activity. Looking at the past, we can date and understand the reasons for the birth of science, specifically fundamental science. However, we do not know precisely when its twilight will take place. Nevertheless, clues to the advent of such a twilight are already in the air; after a very hot summer, the season of falling leaves always comes. This article presents the underlying rationale suggesting that we are now past the golden age of pure science, and how we need to adapt our research to this new era.
Symptoms of decline
Today, science and some of its priests enjoy a high status in our society. By science, this article refers to the pure sciences, as distinct from the applied sciences. We have witnessed gargantuan amounts of money invested to support them. The quantity of publications, the number of big instruments and the technology created, the number of jobs created in research, and the tight control of our science compared with past times could all be used as arguments to show that science is presently living in a golden age.
But there are symptoms which indicate a decline of our scientific culture. First, our society is drowning in huge amounts of knowledge. Most of it concerns research of little importance to the progress of our world view, and produces no advances in the basic fundamentals of pure science. Instead, we invent countless technical applications or investigate secondary details.
Second, in the few fields where some important aspects of unsolved questions have arisen, powerful groups of administrators of science control the flow of information. They have inherent biases resulting in a preference for consensus truths, rather than having objective discussions within a scientific methodology. This process gives few guarantees that we are obtaining solid new truths about nature.
Finally, should the current scientific process continue the way it is, individual creativity is condemned to disappear. Truly creative scientists are being replaced by large corporations of administrators and politicians of science, specialised in finding ways of extracting money from states for megaprojects with increasing costs and diminishing returns.
A hive without a soul
In essence, our science has become an animal without a soul. Or rather a colony of animals, a group of organisms, which devour human efforts and do not offer anything but growth for the sake of growth. Scientific organisations behave like a colony of bacteria which reproduce as far as the available food and money allow. The more you feed them, the more they grow: more PhD students, postdocs, staff researchers, papers, supercomputers, telescopes, particle accelerators etc. And, if the money tap is closed, the number of people who dedicate their time to science and its by-products is reduced proportionally.
Almost everything in science is reduced to finding a small fiefdom of nature to analyse, regardless of whether any fundamental question remains to be solved there. The whole process boils down to publishing papers on such fiefdoms and getting citations from colleagues, with the aim of securing jobs and extra money for expenses, and then money to employ more PhD students, postdocs, and so on. When these students and postdocs grow up, they become new senior researchers who ask for more money, and so on… The sense of all this industry is one of primitive life: just a struggle for survival and the spreading of intellectual genes.
The business of science in crisis
It is not only a crisis of sense and spirit. It will also be a crisis in the business of science, at least for the pure sciences, if not necessarily for technological applications. Scientific institutions follow the structure of capitalism, so they must grow continuously. Experimental science becomes ever more expensive with time, and science has opted for this path of no return, always pushing for an increase in funds. When investment in science reaches the limit where it can no longer grow, a crisis will become unavoidable.
Nowadays, the richest countries aim to invest around 3% of GDP in research and development, of which some 20% is dedicated to the pure sciences; that is, roughly 0.6% of GDP. This ratio is much higher than in the past, both in absolute and relative terms, and it has grown continuously over the last few decades, with some small fluctuations. Such investment is possibly already close to the asymptotic limit of what a society can afford. So an economic crisis in science may not be very far away. It could well be that many research centres will continue for some decades with constant or decreasing budgets. But eventually they will recognise that no advances can be made without increasing budgets. Then these centres will begin to close, one after another.
This will not happen very fast; it will be a process possibly lasting several generations. And the decline will not affect science alone: the sinking of science will run parallel to the sinking of many other aspects of our civilisation. The end of science will mean the end of modern European culture, the twilight of an era initiated in Europe around the fifteenth century and now extended throughout the world: the scientific age.
The end of the golden age of science
Are we not wise enough to stop this decline? No, we are not. We have plenty of cumulative knowledge, but memory is neither intelligence nor wisdom. Humans are individually intelligent, but when they associate in big groups this intelligence is diluted. Global warming, for instance, cannot be halted, due to this collective stupidity. In my opinion, the fate of our civilisation cannot be changed. “It is not the individual but the spirit of a culture who gets fed up,” said the early twentieth-century philosopher of history Oswald Spengler in his book ‘The Decline of the West.’ Whatever has to happen will happen. Societies develop their cultures, which grow, reproduce and die.
The golden age of science will never come again. But we could, at least, try to preserve something of the spirit of science, in which the best intelligences can produce smart solutions to various problems.
Thinking up new ideas through low-budget experiments or intellectual developments produced by a few individuals has more merit than the mega-expensive macro-projects of big science. Many scientists might complain about this statement and say: “With a low budget, we cannot create innovative science.” And the answer should be equally firm: “If you cannot produce new ideas or new analyses of available data in science, and your only idea of advance is to ask for more money for a device more expensive than the previous one, then the only option left is to leave research.”
Martín López Corredoira, staff researcher at the Instituto de Astrofísica de Canarias, Tenerife, Spain
Featured image credit: Robert Kneschke via Fotolia
The SOM (Society Opinion Media) Institute at the University of Gothenburg conducts annual surveys of the Swedish public, exploring, among other things, media consumption and confidence in societal institutions and different professional groups. Since 2002, Vetenskap & Allmänhet (Public & Science), an independent and influential Swedish non-profit membership organisation that promotes dialogue and openness between researchers and the public, has added a section to the SOM survey to study public confidence in science and scientists.
The first study, which examines the hypothesis that media reports of research misconduct will have an impact on public confidence in science and scientists, has been performed in cooperation with the University of Gothenburg, in Sweden.
This study involves comparing the SOM survey results to the number of articles on research misconduct published between 2002 and 2013. Media included in the study are the nine largest Swedish newspapers, including four tabloids, and a news programme on Swedish public service TV.
A total of 356 news reports on research misconduct were coded for the period from 1 January 2002 to 31 December 2013. The number of published news items on research misconduct fluctuates widely from one year to the next and, in general, does not seem to be on the rise. If anything, there has been a slight decrease in the number of research misconduct articles during the last two years of the study. Only 16% of the articles were published by tabloids.
Medical research is by far the most frequently reported area for research misconduct; more than half of all published articles concern this field of research. As medical research is an area of high public interest and concern, misconduct within this discipline may generally be considered more newsworthy than misconduct in other research areas.
When comparing the number of published articles on research misconduct with public confidence in science and scientists, the increased reporting of misconduct in medical research during 2005 and 2008 appears to correspond to decreased public confidence in medical science and scientists (see Figure below).
A similar pattern can be seen for the humanities in 2005 and to a certain extent for the social sciences in 2005 and possibly also in 2011; for technical reasons, the confidence data of the latter year may not be entirely reliable, and should thus be interpreted with caution.
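The comparison the study describes, matching yearly counts of misconduct articles against yearly confidence figures, amounts to looking for a (negative) correlation between two short time series. A minimal sketch of that computation is below; the yearly figures are illustrative placeholders, not the actual SOM survey data.

```python
# Sketch of the article-count vs. public-confidence comparison.
# All numbers here are invented placeholders, NOT the SOM/Vetenskap
# & Allmänhet data; only the method (Pearson correlation) is shown.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical example: more misconduct articles, lower confidence.
misconduct_articles = [10, 20, 30]   # news items per year (placeholder)
confidence_percent = [80, 70, 60]    # % expressing confidence (placeholder)

print(pearson(misconduct_articles, confidence_percent))  # -1.0 for this toy data
```

A strongly negative coefficient would be consistent with the pattern the study reports, though with so few yearly data points any such correlation should be read cautiously.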
The study also suggests a strong relationship between media consumption and confidence in science and scientists. Regular readers of a morning paper, those who read it at least three days per week, have more confidence in science and scientists than those who do not read a morning paper regularly.
The data from the study are currently being analysed in more detail. The results will be presented and discussed during a session named Fifty Shades of Deceit – transparency, accountability and public perception of research misconduct at ESOF 2014.
Fredrik Brounéus, Project and Communications Manager, Vetenskap & Allmänhet, Stockholm, Sweden.
Karin Larsdotter, Research and Project Manager, Vetenskap & Allmänhet, Stockholm, Sweden.
Ulrika Andersson, Research Administrator at the SOM Institute, University of Gothenburg, Sweden.
Maria Lindholm, Director of Research, Vetenskap & Allmänhet, Stockholm, Sweden.
Featured image credit: puckillustrations via Fotolia
A Danish research project on the so-called Nordic diet has raised concern about new trends in the way science is communicated to the wider public, through untimely PR campaigns. The example of the OPUS Research Centre at the University of Copenhagen, Denmark, stands out. The centre aims to investigate whether public health in Denmark can be improved by renewing the Danish culinary culture. The idea is to develop a healthier, tastier and more sustainable New Nordic Diet (NND) for the Danish public. In principle, this research has the potential to identify suitable, locally sourced food. Nothing wrong with such a noble approach, at first glance. However, since its creation the OPUS Research Centre has played on Nordic pride and promoted the NND as a very healthy diet with many benefits. The trouble is that it started its promotional activities before any research findings had been published.
The concept of the NND has been defined by a group of chefs as healthy food with local characteristics and cultural history. This means, for example, that tomatoes, olive oil and other types of food that are not easily produced in the Nordic countries are not part of the diet. The philosophy of creating a new food tradition based on local products also appeals to local food producers, and there is a large economic interest in the promotion of local products. The concept rests on entrepreneurial chefs with high media profiles and strong interests in promoting the ‘New Nordic’ vision. One of these food entrepreneurs, Claus Meyer, is represented on the board of the OPUS Research Centre.
Some Danish researchers have criticised this practice of promotion based on wishful thinking. “It provoked me that OPUS was campaigning in the media on the positive benefits of the NND and clearly tapping into a public hype about a superior ‘Nordic way’ at the funding application stage. In competition with other projects, OPUS succeeded with this carefully designed media strategy in getting a €13 million grant from the Nordea Foundation [which is a large private foundation in Denmark],” says Poul Nissen, director of the PUMPkin Research Centre, the Centre for Membrane Pumps in Cells and Disease, at Aarhus University, Denmark.
Furthermore, he notes, the project included huge investments and marketing strategies devised by the food industry. “Research promoted and organised in this way will never admit to contradicting results. And it changes what should be a fair competition for research funding to a fight through PR and marketing. This is where science checks out,” Nissen adds.
Backlash of substance over PR
The OPUS Research Centre was very successful with its media promotion of the NND until September 2013, when Arun Micheelsen, a researcher from the OPUS Centre, presented results that indicated problems with the concept. The centre’s consumer surveys showed that “it was too time-consuming to prepare the food, it was difficult to get the right ingredients and people [had the tendency] to change the recipes,” says Micheelsen. In spite of the hype, only a very small group of the consumers who tested it found it practical to change their diet to match the new concept.
These critical findings made newspaper headlines in Denmark when Micheelsen declared that the OPUS Centre had tried to modify the conclusions of his research. He had also received direct instructions on changes from a non-academic advisory board member, the chef Claus Meyer. The result was a media storm in which a number of prominent researchers raised critical voices against the practice of OPUS and its promotion of the NND. Some even claimed that it amounted to manipulation or fraud.
The response from the University of Copenhagen was a professional social media campaign defending the activities of OPUS. It also produced a report stating that there was nothing legally wrong with the practice of the centre. In a press release in March 2014, the OPUS Research Centre announced that it would allocate more resources to communication, with more emphasis on the production of articles, social media and traditional PR.
The OPUS example shows how fashions in research funding and reward structures are streamlining scientific agendas and undermining traditional academic norms. In the article “Science Bubbles,” published in November 2013 in the journal Philosophy and Technology, Vincent Hendricks, professor of philosophy at the University of Copenhagen, and his postdoctoral researcher David Budtz Pedersen traced the mechanisms of overselling research results.
Their article points to the massive emphasis on the societal relevance of research during the past decade as one factor that may discourage researchers from publishing negative results. “Numerous international studies of research management document how universities have set up financial incentives and reward systems to encourage researchers to publish in high-impact journals on popular topics that generate research funding,” David Budtz Pedersen points out. “This means that researchers will often have very little interest in spending time on problems that break away from the mainstream or do not lead to publishable results. And they will tend to dress their research claims up in ways that appeal to policy makers and evaluators,” he adds.
Science communication experts are concerned about these trends too. One of them shares his scepticism. “Science which camouflages as PR, or vice versa, inevitably erodes the trust laypeople put in academic institutions,” says Alexander Gerber, professor of international science communication at Rhine-Waal University in Kleve, Germany, who specialises in the socio-political and transcultural aspects of science communication. He adds: “Empirical communication research has unambiguously shown that PR campaigns in science do not change public attitudes and people’s behaviour as intended if they only cater to the outdated idea of disseminating presumed ‘truths’ in a marketing manner.”
He also believes that a greater issue is at stake. “It would be dreadful if European institutions fell back into the dissemination deadlock,” he says, concluding: “This certainly is a major challenge for Horizon 2020, as long as research projects are mainly required to disseminate [research findings] instead of transparently addressing the real ethical, legal, and social implications of their work.”
Featured image credit: CC BY-NC 2.0 by Alex Berger
Jens Degett is a freelance journalist based in Copenhagen, Denmark.
The Incredible Shrinking Man is a speculative project investigating the implications of downsizing the human species to better address the demands we place on the Earth. It has been a long-established trend for people to grow taller, and as a direct result we need more resources: more food, more energy and more space. At the dawn of agriculture, about 10,000 years ago, an estimated 5 million people lived on Earth. Today that number stands at well over 7 billion, and the pressure of human omnipresence on Earth keeps increasing. Every day, alarming scientific papers warn us of imminent doom due to the overexploitation of our environment.
But what if we used our increasing scientific knowledge of the factors that contribute to human height to shrink mankind down to 50 centimetres? One hypothesis is that it would alleviate the resource pressure on the planet. Four years ago I decided that this seemingly absurd question was worth investigating.
Although I am not insensitive to the project’s metaphorical qualities in times of ecological, social and economic crisis, I was much more interested in researching whether and how such shrinking could really be achieved. For reasons outlined below, which I only came to understand later myself, the Incredible Shrinking Man gained the attention of the art/science community.
Because of the seemingly absurd nature of the initial question, I needed to start doing my own scientific experiments, with zebrafish, and I therefore cooperated with scientists to acquire the knowledge necessary to underpin my arguments. The project also mirrors one of my personal interests and concerns: the position of science within society. In particular, I worry about the notion of autonomy in research. What does science want?
Science is good at measuring the destructive force of man’s presence on Earth, in the same way it has been good at creating the visions and the tools for the destruction itself. Unfortunately, at this crucial time for mankind, it has not shown itself to be equally powerful at putting forward a vision that inspires mankind to change its destructive paradigms. And that is understandable: figuring out how things work is simply not the same as knowing what to do with that knowledge. And yet, more than ever before in the history of man, humanity needs science to independently put forward its vision of how we are going to deal with the incredible challenges we are facing.
To do so, perhaps, it is necessary to learn from another important expression of humanity’s desire for knowledge: art.
When I decided to become an artist, a little over five years ago, the first, rather frightening thing I realised was that from that moment onwards I represented nothing but myself. There was no language, no institution, no displaced responsibility to hide behind. If I said something, I said it. If I desired something, I desired it. But I also realised that society can no longer afford to condemn such notions to the realm of art and romanticise them.
Faced with today’s societal challenges, we need scientists to become artists, and we should not confuse art with innovation or creativity, or with all the other reasons why the marriage of art and science is so often desired but so rarely understood. Instead, science needs to take responsibility for a vision of the future. Science should attain public visibility, and scientists should be more outspoken in public about their desires in order to initiate a debate.
To most readers the Incredible Shrinking Man is, perhaps, not that vision. But I would argue that it is at least an attempt to direct knowledge in a way that enables humanity in general, and perhaps science in particular, to take responsibility for the fate of our existence.
Let’s shrink into abundance.
Arne Hendriks, artist, Amsterdam, Netherlands
Featured image credit: Arne Hendriks