Fake news: unobservant audiences are easily swayed

How the public responded to a post referring to non-genetically related identical twins

Fake news is no longer easy for readers and consumers online to identify. Numerous factors help explain this: sophisticated automation software, economic incentives that reward whatever grabs attention, and more. Producers of unreliable information are not held to ethical regulation or standards of journalistic integrity. Fake content is written in language that exploits tentative truth and suspected veracity. Opinions are delivered in vague terms, so readers need to be wary from the moment such opinions are relayed by media outlets.

The competition that news organisations stage to win an ever greater slice of the online audience shapes how we all behave. With a highly effective and visible digital podium from which to speak, content managers, bloggers and web content ‘produsers’ (note the deliberate spelling, blending producer and user) fuel an enormous money-making system. And since anyone can now produce content online, what’s stopping someone from churning out their very own content … even fake content?

This article presents a recent experiment in which fake information was deliberately fabricated and spread in order to test the general public’s response to such falsification. It then considers the role the media has to play in fending off misleading information.

Deliberate fake news posting

As part of an independent research project, I set out to demonstrate to audiences consuming information online just how easy it is to fool and deceive digital readers. The study ran from 16 March to 4 April 2017 and monitored participants who were not aware they were being studied. It consisted of releasing a photo to the public via Instagram with a statement overlaying the image: “Non-genetically related identical twins are the cutest on the planet.” This statement relied on the well-known clickbait word “cute” to get audiences to pay attention only to the image and to respond to it emotively, in an open social media space.

I first selected a stock photo from the Pixabay.com Creative Commons collection. A photo of a woman in her mid-20s was chosen for the fake profile, matching the demographic most commonly found among active Instagram users according to Pew research. I then added a profile description showing an interest in media and science. The account included 8 photos in total, one of which showed a similar likeness to the test photo carrying the lie. The photo with the lie occupied the first position on the profile and was the only photo with any writing overlaying the image.

I then distributed the image using free or low-cost automation software sites of the kind typically used by content marketers in the digital sphere. Commercially available content management systems such as Hootsuite, Buzzsumo, Peerboost, Instagress, and Archie also helped me monitor the progress of the content online. These tools let me filter audience members by interest and location. For this study I targeted Irish social media users seeking information on Instagram through the hashtags #Science, #Research, #Discover, #ScienceNews, and #FakeNews.
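
The commercial tools above keep their internals private, so, as a rough illustration only, here is a minimal Python sketch of how filtering an audience by hashtag and location might work. The post structure, field names and target values are hypothetical, not any tool’s real API.

    # Minimal sketch of hashtag- and location-based audience filtering.
    # The post structure and field names are hypothetical.
    TARGET_TAGS = {"#Science", "#Research", "#Discover", "#ScienceNews", "#FakeNews"}
    TARGET_LOCATION = "Ireland"

    def in_target_audience(post):
        # keep posts geotagged to the target location that carry
        # at least one of the target hashtags
        return (post["location"] == TARGET_LOCATION
                and bool(TARGET_TAGS & set(post["hashtags"])))

    posts = [
        {"location": "Ireland", "hashtags": ["#Science", "#cute"]},
        {"location": "France", "hashtags": ["#Research"]},
    ]
    matches = [p for p in posts if in_target_audience(p)]  # keeps only the first post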

Using the automation tools Peerboost, Instagress, and Archie, I set up a pool of preset comments that would rotate and be reposted online. I programmed the comments to be sent at random, ranging from positive statements (“Cool post”) to calls to action for further sharing (“Check this out”, “You won’t believe it…”) and bold statements (“Non-genetically related identical twins have a cancer link… what’s this about?”). Each comment carried a hyperlink to bring curious individuals back to the fake profile for examination.
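
In principle, such comment rotation is simple to implement. The Python sketch below is illustrative only: it assumes a placeholder profile link and reduces the tools’ behaviour to a random draw from a preset pool, whereas the real services layer scheduling and posting on top.

    import random

    # Preset comment pool mirroring the three categories used in the study.
    COMMENT_POOL = [
        "Cool post",  # positive statement
        "Check this out",  # call to action for further sharing
        "You won't believe it...",  # call to action for further sharing
        "Non-genetically related identical twins have a cancer link... what's this about?",  # bold statement
    ]
    PROFILE_LINK = "https://instagram.com/example_profile"  # placeholder, not the study's account

    def next_comment():
        # pick a preset comment at random and append the hyperlink
        # that leads curious readers back to the profile
        return random.choice(COMMENT_POOL) + " " + PROFILE_LINK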

Probing results

Over the three weeks the automation software ran, I recorded the profile and audience data. The photo with the lie attached was visible to 6,610 people on Instagram, and 535 people chose to follow the fake profile. 132 people clicked on the photo with the lie and “liked” it (a “like” as defined by Instagram usage). The photo received the most interaction of any on the account, with an average increase in engagement of 3-4%. 8 people commented positively on the photo. Only 3 of the 6,610 questioned whether the randomly issued automated comments were genuine, with 1 person responding, “Are you like a bot or a real person even?”
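
For context, these raw figures can be turned into rates. Definitions of “engagement” vary between analytics tools, so the quick Python calculation below uses one common definition (interactions divided by reach) purely as an illustration, not as the formula the software itself applied.

    reach = 6610          # people who saw the photo
    likes = 132
    comments = 8
    new_followers = 535

    # one common definition: interactions divided by reach
    engagement_rate = (likes + comments) / reach * 100
    follow_rate = new_followers / reach * 100

    print(f"engagement rate: {engagement_rate:.1f}%")  # ~2.1%
    print(f"follow rate: {follow_rate:.1f}%")          # ~8.1%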

These results show that automation software works when comments mimic the conditions of real users commenting online. By that logic, in the comment sections of YouTube, Reddit, Twitter or any other platform, a sizeable share of the comments you read may have been randomly generated, pre-set by a content manager with the purpose of provoking engagement. Why? Perhaps simply to draw attention back to the commenter’s own website.

Platform imperialism

Researcher Dal Yong Jin of Simon Fraser University in Vancouver, Canada, published a study on the most-used Internet platforms and found that “98% of them were run by for-profit organisations, 88% used targeted advertising, 72% had their home base in the USA, 17% in China, 3% in Japan, 4% in Russia, 2% in the UK, 1% in Brazil, and 1% in France.” He concluded that there is a “platform imperialism”, in which “the current state of platform development implies a technological domination of U.S.-based companies that have greatly influenced the majority of people and countries.”

Traditional media outlets need to be held accountable for their contribution to this problem. While competing for attention in the digital sphere, traditional media are changing their operating mechanisms and the ethical standards by which they report. Audiences don’t need more and more articles telling them, “Look out, we’re under attack! It’s fake news! Run for your digital lives!” Such constant alarm heightens anxiety and, as the language becomes habitual, eventually reduces scrutiny of fake content.

Instead, media sources need to use their warnings and headlines effectively. This is especially true when it comes to communicating science content and technical, complex information. When the internet yells alerts at audiences day in, day out, their attention wanes. Worse, polarisation and a hyper-paranoid state of mind can develop and halt progress towards a resolution in a world of mixed truthfulness and falsehood.

Mel Hoover

Mel is a science communicator based in Dublin, Ireland
