How cognitive mechanisms make us believe in fake news
Five years ago, the World Economic Forum declared that the spread of misinformation through social media was one of the greatest global risks to our future and prosperity. At that time, the future scale of the threat was still unclear, even to media experts. However, for anybody with the slightest doubt about how rapidly social networks are changing news consumption and its effects, last year was eye-opening and overwhelming.
Misinformation and fake news have influenced every major voting process and strengthened science-denial movements: consider how ubiquitous anti-vaccine propaganda and climate-change scepticism have become. But what are the consequences, and what are the remedies? On 29 June, these and other questions are the subject of discussion in the ‘Science journalism in a post-truth world’ session of the 4th European Conference for Science Journalists (ECSJ2017).
Misinformation was not an unknown concept in 20th-century media, but the Internet is proving to be an ideal platform for spreading, multiplying and uncritically consuming unsubstantiated ‘facts’. In the face of this massive free-flow of information, empirical evidence is easily lost. Debunking misinformation has become a burning issue, but fighting against fake news is anything but easy. Discovering the mechanisms behind this phenomenon is obviously the first step.
According to cognitive scientists, the first and biggest problem with debunking misinformation is that it continues to influence judgements and conclusions even after correction. Researchers first fully described this phenomenon in 1994 and dubbed it the ‘continued influence effect’. Hollyn Johnson and Colleen Seifert, of the University of Michigan in the United States, exposed subjects to a story about a warehouse fire that contained misinformation and then manipulated the timing of the corrected version. Johnson and Seifert concluded that we cannot simply erase wrong information from memory by saying it is untrue, even if the correction is presented immediately after consumption.
In the last ten years, many similar studies have contributed to unlocking the complex cognitive mechanisms behind this ‘continued influence effect’. Stephan Lewandowsky, of the University of Bristol in the UK, is one of the most prominent experts in this field. Lewandowsky’s The Debunking Handbook, co-written with John Cook, summarises the main backfire effects that can occur during correction attempts. For example, an ‘overkill backfire effect’ can happen if we use too many complicated arguments; information that is easy to process is more readily accepted as true. Debunking also depends on pre-existing beliefs, and on cognitive processes such as confirmation bias, the tendency to selectively search for information that confirms one’s beliefs.
The most puzzling backfire effect, and the one that perplexes scientists most, involves reinforcing misinformation’s familiarity by referencing it during a correction attempt. It is difficult to debunk anything without mentioning the original error, and therein lies the problem: when people rely on automatic, simpler ways of retrieving memories, false information is easier to accept as true because it seems familiar. Here the art of debunking becomes more complicated. Although some research shows that such debunking attempts reinforce misinformation as the ‘truth’, in two 2017 experimental studies Lewandowsky, Ullrich Ecker and Briony Swire found no evidence of a true ‘familiarity backfire effect’.
Lewandowsky explains that familiarity is probably overpowered when we engage our more complex, strategic memory processes; these allow us to remember details such as where, when and how we learned the information. However, strategic memory could easily fail if we are distracted by something or unwilling or unable to exert ourselves mentally. Therefore, even if mentioning the misinformation in the process of debunking does not always lead to negative results, it is better to avoid it as a precaution and focus on the facts. “Processing represents a balance between familiarity-based responding and strategic processing, and when the former outweighs the latter, then you would expect a familiarity-based backfire effect,” explains Lewandowsky.
In their most recent research, published in the journal PLOS ONE, Cook, Lewandowsky and Ecker also confirmed that people can be ‘vaccinated’ against misinformation with different types of prior warnings. For example, the well-known phenomenon of false-balance media coverage, which decreases acceptance of scientific facts, could be successfully neutralised with a prior warning about misleading content.
It is also important to point out that debunking in the digital age may come with additional ‘online traps’. As Lewandowsky warns, everything online looks equally credible, information spreads so much faster and every opinion will find confirmation somewhere on the Internet.
In a continuing effort to understand the mechanisms behind misinformation spreading on the Internet, Walter Quattrociocchi, of the IMT School for Advanced Studies Lucca in Italy, worked with colleagues to analyse thousands of posts and user interactions on public Facebook pages. They divided these into two categories – scientific and conspiratorial content. Their findings from last year, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), clearly showed the existence of ‘echo chambers’ – polarised communities of users who mostly select and share content related to a specific narrative and tend to ignore the rest. They concluded that confirmation bias fuels these echo chambers, which in turn promote content diffusion on Facebook. But while genuine science news spreads quickly to start with, results show that conspiracy rumours have a much longer active lifespan on Facebook.
In their most recent research, published this year in PNAS, Quattrociocchi and his colleagues explored news consumption by analysing 920 news outlets and 376 million users. They concluded that users consuming news on Facebook limit their focus to only a few sites. Despite the large number of available news sources, marked segregation and growing polarisation characterise online news consumption.
“Online discussion negatively influences users’ emotions and intensifies polarisation, creating ‘echo chambers’ where beliefs become reinforced,” says Quattrociocchi. “With users on social media aiming to maximise the number of likes, information is frequently oversimplified,” he adds, explaining that simplification and segregation provide an ideal environment in which misinformation can spread.
This kind of polarised online environment is also quite resistant to debunking efforts. Research reveals that after exposure to debunking posts, conspiracy fans retain, or even increase, their engagement within the conspiracy echo chamber.
“The truth is a difficult concept, while the way of presenting science is almost religiously formalistic,” explains Quattrociocchi. “At the same time it’s difficult to rely on findings if you are not able to distinguish between correlation and causation. Dealing with phenomena like immigration or economic changes requires concepts like probability or uncertainty, which most people are not familiar with. We get lost in a storytelling that is missing the basic ingredients for understanding reality,” he says.
Confirming the importance of unlocking the cognitive mechanisms behind misinformation’s influence, David Budtz Pedersen, co-director of the Humanomics Research Centre in Copenhagen, Denmark, points out that the changing informational culture has become an effective political tool. Budtz Pedersen concludes: “If you are able to create a polarization between ‘believers’ and ‘non-believers’, you have a strong political platform. New research from Harvard University suggests that fake news does not stick in the mind. An average American voter can only recall 1.4 fake news [items], which suggests that fake news itself does not change political systems. However, the underlying cognitive effect of accumulated fake news and misinformation is political polarization and echo chambers.” Budtz Pedersen warns that “this can have severe damaging effects on democratic deliberation.”
In this fast-changing world of social media, science journalists are the gatekeepers, says Volker Stollorz, chief editor of the Science Media Center in Germany. Therefore, he warns, it is useful for journalists to understand more deeply how the pollution of the science communication environment came about. “Science journalists have been the first profession to discover that identity-preserving cognition can’t be changed by providing just the facts and more knowledge. Whenever a scientific issue gets polarized and tangled along party lines [by] merchants of doubt with vested interests, identity-preserving cognition kicks in and can block inconvenient facts. We have to learn how we can do better storytelling that accepts cognitive biases and adapt communication so that people can develop trust in journalism,” Stollorz concludes.
Vedrana Simičević
Reprinted with kind permission from the European Conference for Science Journalists 2017 (ECSJ2017), held in Copenhagen between 26 and 30 June 2017.
Featured image credit: Kayla Velasquez via Unsplash