These days it seems as if there’s fake news at every turn. In January, the Republican Party handed out “Fake News Awards.” After the recent school shooting in Parkland, Florida, a fake story circulated that the teenage survivors now advocating for gun control were “crisis actors,” and research has shown that misinformation and conspiracy theories are deeply embedded in some of the most popular websites.
Now, however, a first-of-its-kind study has taken a broader look at the spread of fake news on social media, and the results are incredibly troubling.
In the new study, published Thursday, researchers at MIT analyzed 126,000 news stories posted on Twitter by three million English-speaking users over the last 10 years. They found that fake news stories spread faster, reach more people, and become more “embedded” in the social network than true stories.
According to the analysis, true stories take six times as long as false ones to reach 1,500 people, and 20 times as long if the false story has already been retweeted more than 10 times. Falsehoods were also found to be 70 percent more likely to be retweeted than the truth.
There has been an extensive conversation about the role of bots in helping spread fake news across social media. These accounts, either partially or fully automated, have been a key tool in helping to spread misinformation and manipulate people. “Deceptive bots create the impression that there is a grassroots, positive, sustained human support for a certain candidate, cause, policy or idea,” the New Democrat Network said. “In doing so, they pose a real danger to the political and social fabric.”
Worryingly, however, the MIT researchers found that fake news spreads faster than the truth because humans, not bots, are more likely to retweet it, partly because false rumors are designed to be novel and viral. For example, a fake story claiming the Pope had endorsed Donald Trump racked up nearly a million engagements on Facebook after it appeared online in November 2016.
“When information is novel, it is not only surprising, but also more valuable,” the study’s authors write. “[This is] both from an information theoretic perspective (in that it provides the greatest aid to decision-making)… and from a social perspective (in that it conveys social status on one that is ‘in the know’ or has access to unique ‘inside’ information).”
These results raised serious concern among professors and researchers. “The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age,” one group wrote in Science magazine. “A new system of safeguards is needed.”
But what those safeguards should be is anyone’s guess. In the last year, under increasing pressure from the U.S. government, tech companies have lurched from policy to policy in an effort to stop fake news from populating their platforms. Facebook has partnered with third-party fact-checkers and is blocking ads from appearing on fake news sites. In the wake of the recent Parkland school shooting, YouTube has started banning accounts that spread the rumors that the teenagers now advocating for gun control are “crisis actors.” Twitter CEO Jack Dorsey, meanwhile, recently appealed for outside help to deal with the platform’s fake news and harassment problem.
However, even if a one-size-fits-all solution exists, the study makes clear that the human propensity for salacious gossip will hinder any effort to combat misinformation.
“I don’t think [the fake news problem] is because of Twitter and other social media platforms, but because of something that has always existed in human nature,” Soroush Vosoughi, one of the study’s lead researchers, told The Outline. “People like to pass gossip and rumors around. It’s just that it’s now amplified.”