
Facebook’s war against fake news isn’t working

A new study suggests the social media giant's fact-checking tool has done little to combat the problem.

Facebook founder Mark Zuckerberg speaks in the Galileo Auditorium on Microsoft’s Silicon Valley Campus in Mountain View, Wednesday, Oct. 13, 2010. (AP Photo/Jeff Chiu)

A new study by Yale University researchers has found that Facebook’s effort to combat fake news through fact-checking has had remarkably little impact, and in some cases it can backfire, making fake stories that escape tagging appear more credible.

Facebook announced the rollout of its third-party fact-checking tool in December, after concerns that the social media giant wasn’t doing enough to help stop the spread of fake news in the run-up to the 2016 election. The company is now working with five fact-checkers – ABC News, AP, FactCheck.org, PolitiFact and Snopes – to fight misleading information.

Facebook’s algorithm searches for stories that appear false and adds them to a queue for fact-checkers to review. If two of the partner organizations find a story to be false, it is labeled with a badge marking it as “disputed.”
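In rough outline, the workflow amounts to a simple two-verdict threshold. The sketch below is illustrative only, with assumed names, data structures and rulings; Facebook has not published its actual implementation:

```python
# Illustrative sketch of the two-verdict "disputed" rule described above.
# All names, structures and rulings here are assumptions for illustration;
# Facebook has not published its actual implementation.

FACT_CHECKERS = ["ABC News", "AP", "FactCheck.org", "PolitiFact", "Snopes"]
DISPUTE_THRESHOLD = 2  # two independent "false" rulings trigger the badge

def is_disputed(verdicts):
    """verdicts: dict mapping a fact-checker's name to its ruling on one story."""
    false_rulings = sum(1 for ruling in verdicts.values() if ruling == "false")
    return false_rulings >= DISPUTE_THRESHOLD

# Two of the five partners have rated this story false, so it would
# carry the "Disputed by 3rd party fact-checkers" badge.
story_verdicts = {"Snopes": "false", "PolitiFact": "false", "AP": "unreviewed"}
print(is_disputed(story_verdicts))  # True
```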

However, the Yale study found that tagging an article as “Disputed by 3rd party fact-checkers” made participants only 3.7 percentage points more likely to correctly judge whether a story was true.


“These results suggest that the currently deployed approaches are not nearly enough to undermine belief in fake news,” the study’s abstract reads. “New (empirically supported) strategies are needed.”

What’s more, because of the sheer volume of misinformation online, Facebook’s fact-checkers can only flag a small fraction of stories. That means the fake news stories which slip through the net are seen as undisputed – as if they had been checked and passed. This “backfire effect” was particularly strong among Trump supporters and young people.

Gordon Pennycook and David G. Rand, who authored the study, said that these findings “suggest that tagging may do more harm than good”.

“The main potential benefit of the [fact-checking] tag is that it (slightly) increased belief in real news headlines. This seems insufficient, however, to stem the tide of false and misleading information circulating on social media.”

Fact-checkers have previously expressed frustration with Facebook’s efforts to combat fake news, saying the platform won’t give them data that would allow them to prioritize the most important and popular fake news stories – data Facebook says it won’t release because of privacy concerns. It also means fact-checkers can’t see what effect their work has, or whether their debunking makes a story more or less popular.


Facebook has since disputed the Yale study’s methodology and said fact-checking was only one part of its effort to combat fake news. Other efforts include “disrupting financial incentives for spammers, building new products and helping people make more informed choices about the news they read, trust and share,” a spokesperson told Politico.

This latest study is just one in a series of revelations showing how aggressively Facebook has been manipulated over the past two years to inadvertently advance a right-wing agenda.

Last week, Facebook’s Chief Security Officer Alex Stamos admitted that 470 fake Russian accounts were used to purchase $100,000 in targeted ads. The majority of the ads didn’t specifically reference the presidential election but instead “appeared to focus on amplifying divisive social and political messages across the ideological spectrum – touching on topics from LGBT matters to race issues to immigration to gun rights.”

While 470 accounts and $100,000 may not sound like much in the era of billion-dollar elections, that money can go surprisingly far on Facebook. The Daily Beast calculated that $100,000 could hypothetically allow a handful of ads to be seen nearly 17 million times. Russian operatives have even managed to remotely organize political protests, like an anti-Muslim rally in Idaho in 2016, using those fake accounts to promote them.
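The arithmetic behind that estimate is straightforward. As a back-of-the-envelope sketch, assuming a $6 cost per thousand impressions for cheap Facebook inventory (an assumed figure chosen to roughly reproduce The Daily Beast’s number, not one reported by the outlet):

```python
# Back-of-the-envelope version of The Daily Beast's estimate.
# The $6 CPM (cost per 1,000 impressions) is an assumed figure for
# cheap Facebook ad inventory, not a number reported by the outlet.

ad_spend_usd = 100_000
assumed_cpm_usd = 6.00  # assumption: dollars per 1,000 impressions

impressions = ad_spend_usd / assumed_cpm_usd * 1_000
print(f"{impressions:,.0f} impressions")  # 16,666,667, i.e. nearly 17 million
```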

The number of Russian accounts Facebook has admitted to finding and shutting down seems suspiciously low, especially since it has previously been reported that roughly 400 employees were working for one Russian “troll farm” in St. Petersburg alone. Regardless of the number, the tactics fit the Kremlin’s use of “hybrid warfare,” which aims to use a “broad range of subversive instruments…to further Russian national interests.”

Chief among these instruments are communication strategies to help shape political narratives. “[Russian entities employ] large numbers of Internet trolls, bots and fake news farms,” The RAND Corporation’s Christopher Chivvis told the House of Representatives in March. “The objective of these information operations is primarily to muddy the waters and cast doubt upon objective truths.”


Facebook’s fledgling efforts to combat fake news, combined with the still-unknown scope of Russian interference, show the mountain tech companies have to climb to prove that they are not facilitating fake news or providing platforms for white supremacist hate speech.

After an attack at a white supremacist rally in Charlottesville, Virginia, last month left one counter-protester dead, Zuckerberg announced that there was “no place for hate in our community” and reaffirmed Facebook’s commitment to taking down any post that promotes or celebrates hate crimes. Platforms like Discord, Twitter and YouTube have also carried out major purges of far-right accounts and content.

The memes, videos, and fake news articles that were posted on Facebook have provided a playbook for white supremacists to continue growing their network on sites like Gab.ai, which has been described as a “digital safe space for the far right.” Moving forward, tech companies will likely be forced to issue preemptive strikes against that sort of content, in order to avoid Facebook’s fate.