
Where are they now? The Russian bots that disrupted the 2016 election


“There are many different truths. There has to be a pluralism of truth.”

That’s what Russian propaganda site Sputnik’s Paris-based editor in chief Nataliya Novikova told the New York Times when asked about false stories published about France’s presidential contender Emmanuel Macron.

Novikova’s pluralist truth, however, is actually a meld of unsubstantiated media reports designed to disrupt France’s electoral process.

In February, Sputnik quoted Nicolas Dhuicq, a Republican member of the French Parliament who belongs to the Kremlin-friendly French-Russian Dialogue Association, who claimed Macron was backed by a “very rich gay lobby.”

The story took off on social media, prompting the 39-year-old former economy minister to publicly laugh off the rumors during a rally in Paris.

“If in dinner-party chatter, or in forwarded emails, you’re told that I have a double life,” he added, taking a dig at a competitor, “it’s my hologram that suddenly escaped, but it can’t be me!”

Why did a rumor about a French presidential candidate’s sexual orientation take flight despite the country’s sexually liberated reputation? Blame it on the bots: fake online accounts programmed to post messages automatically and mimic a real person’s identity.

“All of us are the target for disinformation.”

By now, we know these fake accounts helped influence the latest U.S. presidential election. Automated bot accounts made up 18 percent of Twitter’s traffic related to the 2016 election, according to researchers, and these bots overwhelmingly pushed messages in favor of now President Donald Trump. At least a third of pro-Trump tweets during the election came from bots, and half of Trump’s most engaged Twitter followers are bots.

Seventeen U.S. intelligence agencies agreed that Russia was behind several hacking incidents, including the infamous email breach of the Democratic National Committee last year that former presidential candidate Hillary Clinton blames for her electoral loss. Hacking, however, was only part of the equation. The use of social media bots to spread fake news was part of a larger disinformation campaign to help Trump get elected.

But now that the United States’ election is over, where are they?

Since November, bot activity targeting the U.S. has ebbed, according to Ben Nimmo, the Atlantic Council’s information defense fellow, with clusters of botnets pivoting to European elections in the Netherlands, Germany, and particularly France.

In April, for example, Facebook shut down more than 30,000 French accounts that were spreading fake news on the social network about French presidential hopeful Marine Le Pen, who is often referred to as France’s Donald Trump. Like Trump’s, Le Pen’s campaign has been bolstered by an energetic base and a small online army.

As Le Pen solidified her place as a contender to be France’s next president in the country’s first round of voting April 23, bots that formerly tweeted in support of Trump’s presidency started tweeting for Le Pen. Twitter users across the spectrum, from anti-Trump activists to conservative blogs to media organizations, took note.

When they’re used systematically, bots can have international political consequences and can even be considered weapons. “Five thousand bots, all of which tweet the same thing at the same time to distort the information space, that’s a problem,” Nimmo said. “In a sense they are defensive accounts in that what they’re doing a lot is sheltering Russia.”

That’s Russia’s tell: bots usually revolve around issues that at least tangentially concern the country’s image or interests.

It’s difficult to pinpoint who’s behind the bots with absolute certainty, Nimmo said, but researchers have tied many of the pro-Trump bots to Russia because of their political bent and their sourcing to known disinformation sites based out of Russia, such as Sputnik or RT.

Bots and online rumors can have serious consequences. A North Carolina man recently pleaded guilty to brandishing a firearm in a D.C. pizza parlor last year; he believed a conspiracy theory circulating online that Hillary Clinton was running a child sex ring out of the restaurant.

The incident poured gasoline on the fiery populist sentiment and nationalist rhetoric surrounding Trump’s election, but it wasn’t the first time bots have intersected with politics. During the 2010 special election to fill Massachusetts Sen. Ted Kennedy’s seat, researchers found that an Iowa-based Republican group used bots to smear Democratic candidate Martha Coakley.

“The Russians have been doing this…to sow discord and distrust in democracy, democratic institutions and the government.”

The spread of false stories also laid the groundwork for a volunteer nurse to be quarantined during the Ebola crisis in 2014. Kaci Hickox was detained and repeatedly tested upon her return to New Jersey from volunteering in West Africa. Republican Gov. Chris Christie reportedly said that she was undoubtedly ill even though she presented no symptoms. The ordeal got Hickox evicted from her Maine home and led Maine Gov. Paul LePage to request she be quarantined again as residents, convinced by fake articles that she was a public health risk, called police.

But for elections like Trump’s and potentially Le Pen’s, victory for a certain candidate isn’t the end goal for political bots. The ultimate goal is chaos: a sense that even the strongest, most democratic nations can be destabilized.

What does a bot attack look like?

Russia isn’t new to spreading fake news. In fact, this tactic stretches back decades.

“The Russians have, back into Soviet days, sought to use information operations and warfare,” said Max Bergmann, former State Department policy analyst and senior fellow on European security and U.S.-Russia relations for the Center for American Progress.

Over the years, Soviet intelligence officials have spread rumors that U.S. spy agencies orchestrated Dr. Martin Luther King Jr.’s assassination and later that the CIA created the AIDS virus at Fort Detrick in Maryland.

Before social media, however, it was harder for Russian operatives to disseminate their messages.

“It was more difficult, there were fewer media outlets that you could use. Social media didn’t exist, you had to get things in print and then distribute it, but that cost a lot of money and you had to have people on the ground,” Bergmann said. “With the internet, social media, and people getting their news from social media, it’s a whole new information landscape.”


Now, modern technology and media techniques have supercharged traditional disinformation campaigns. Instead of printing pamphlets one at a time, bots can pump out thousands of links in orchestrated unison, artificially dominating timelines.

Political bots come in two forms: cyborgs and botnets. Cyborg accounts have more of a human presence, occasionally posting original content. That ranges from acerbic messages about the state of a nation and insults aimed at liberal opponents to hyperbolic statistics and links to fake news articles.

Bots can be a single account or a small group of accounts that suddenly become hyperactive and tweet 1,000 times a day. Botnets tend to be larger groups, typically hundreds or thousands of accounts, that tweet identical or near-identical messages at the same time. Those messages flood hashtags to make them trend, which baits reactions from the public. Botnets can also amplify messages from cyborg accounts through retweets, follows, and other interactions.

For example, Bergmann said, a #HillaryWon hashtag could easily be flooded with pornography and claims that she’s a murderer, which effectively inhibits genuine social discourse.
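The signatures described above, identical text posted in synchrony and posting volumes implausible for a human, are exactly what researchers look for when hunting botnets. The sketch below is a minimal illustration in Python of those two heuristics; it is not a tool used by Nimmo, Bergmann, or anyone else quoted here, and the function names, thresholds, and input format are all assumptions for demonstration.

```python
from collections import defaultdict
from datetime import timedelta

# A minimal botnet-spotting sketch based on the two tells described above:
# many accounts posting identical text at nearly the same time, and single
# accounts tweeting at a pace no human could sustain. All names, thresholds,
# and the (account, text, timestamp) input format are illustrative assumptions.

def normalize(text):
    """Collapse case and whitespace so near-identical tweets match."""
    return " ".join(text.lower().split())

def find_botnet_clusters(tweets, window_seconds=60, min_accounts=50):
    """tweets: iterable of (account_id, text, datetime) tuples.

    Returns {message_text: set_of_accounts} for messages posted by at least
    `min_accounts` distinct accounts within `window_seconds` of one another.
    """
    by_text = defaultdict(list)
    for account, text, ts in tweets:
        by_text[normalize(text)].append((ts, account))

    clusters = {}
    for text, posts in by_text.items():
        posts.sort()  # chronological order
        start = 0
        for end in range(len(posts)):
            # Shrink the window until it spans at most window_seconds.
            while posts[end][0] - posts[start][0] > timedelta(seconds=window_seconds):
                start += 1
            accounts = {acct for _, acct in posts[start:end + 1]}
            if len(accounts) >= min_accounts:
                clusters[text] = accounts
                break
    return clusters

def find_hyperactive_accounts(daily_counts, max_human_rate=1000):
    """daily_counts: {account_id: tweets_per_day}.

    Flags accounts at or above the 1,000-tweets-a-day pace the article
    cites as a hallmark of hyperactive bot behavior.
    """
    return {acct for acct, n in daily_counts.items() if n >= max_human_rate}
```

Real detection systems layer on many more signals, such as account age, follower ratios, and where an account’s links point, but synchrony and volume are the two tells the researchers quoted here keep returning to.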

The aftermath of a bot attack

The last 12 months have been dominated by talk of Russia and its involvement in trying to tip the outcome of the U.S. presidential election. Traces of Russia were found everywhere — from the contents of the Democratic National Committee’s hacked emails, which served to amplify gossip about the Democratic Party’s dislike of Sen. Bernie Sanders, to the now-resolved concerns that Russian hackers breached voting machines in several districts to ensure Trump’s victory.

The events led to demands for answers, congressional hearings, and extensive intelligence reports to determine whether Russia was actually behind it all. The FBI also began investigating two U.S.-based news sites, Breitbart and Infowars, for their possible involvement in propagating Russian fake news.

But Russian bots aren’t limited to interfering in the United States’ politics.

“This is not something that is unique to the 2016 campaign,” Bergmann said. “The Russians have been doing this in Europe, quite effectively, to sow discord and distrust in democracy, democratic institutions and the government.”

In some ways, for Russia, the Cold War never ended. Sergey Tretyakov, a Russian intelligence officer who defected to the United States, told author Pete Earley in the 2008 book Comrade J, which details Tretyakov’s espionage career, that “Nothing has changed. Russia is doing everything it can today to embarrass the U.S.”

The Russian-linked campaign to tip France’s presidential election mimicked the one in the U.S. and went beyond supporting Le Pen, using a combination of bots, fake news, and hacking attempts to derail her biggest competitor, independent centrist Emmanuel Macron.

“We basically have a situation since 2014 where the United States, Europe, and Russia have been on a very confrontational course.”

Bots circulated rumors throughout cyberspace and the Russian fake news circuit that Macron was a U.S. agent meddling in France’s finances, that he was gay, and that his campaign was funded by Saudi Arabia.

Macron’s campaign also suffered hacking attempts ahead of France’s first round of voting April 23 and another set of coordinated attacks Friday, just 36 hours ahead of the second round of voting set for May 7. No data was stolen in the first attack, but campaign documents, some of them fake, and email addresses were leaked in the second hack.

Investigators haven’t been able to definitively link Russia to the hacking methods, but say the attacks were similar to those against the DNC in the lead-up to Trump’s election, which U.S. intelligence officials did attribute to Russia. Macron’s campaign said the attack was launched “to sow doubt and misinformation,” the New York Times reported.

Macron became a target of Russia’s online campaign because of his politics, namely his support for France staying in the EU. By supporting Le Pen, who wants France to leave the bloc, Russia can more easily make alliances with individual nations than with several acting as one, Bergmann said. The EU is a reliable U.S. ally, which makes it a threat in the Kremlin’s eyes.

This three-way conflict among U.S., EU, and Russian interests goes back to 2014 and the Maidan Revolution in Ukraine, when Kiev had to decide between joining the European Union and Russia’s Eurasian bloc; the upheaval spurred Russia’s occupation of Crimea. That was a turning point in Russia-U.S. relations. The Obama administration issued economic sanctions, which caused a massive economic downturn in Russia. But instead of backing off, Putin doubled down, Bergmann said.

“We basically have a situation since 2014 where the United States, Europe, and Russia have been on a very confrontational course,” he said. “The U.S.’s hope was that by putting these sanctions in place, that would cause a real problem for Putin and Russian oligarchs and would prompt Russia to climb down and change their behavior. The result was actually the opposite.”

Trump’s flamboyant candidacy, and now Le Pen’s, presented an opportunity for the Kremlin to recreate the chaos that U.S. sanctions caused.


In between the U.S. and French presidential elections, Nimmo noticed many of the Russian-associated accounts focused on other world events that affected Russia’s interests.

“The pro-Russian hyperactive cyborg accounts that I have seen have been looking at Syria and back to the Ukraine,” he said.

Nimmo also observed that Russian bots tend to get involved in media narratives that might have implications for Russia. For instance, there was bot activity around media accusations that Russia shot down Malaysia Airlines Flight 17, activity that, Nimmo said, is “not meant to destabilize the West, but to distance the Kremlin from war crime accusations.”

Another example is bots’ response to news reports that Russian and Syrian forces have targeted hospitals, attacks that have been labeled “war crimes.” The White Helmets are the first responders aiding victims of these attacks, and bot activity has been targeting their work, Nimmo said.

“A lot of the hostile activity, such as the attacks on the White Helmets in Syria, has been driven very much by that concern over war crimes. When the White Helmets documentary won the Oscar, there was a lot of hostile commentary from across the Russian spectrum,” Nimmo said.

That spectrum includes a host of Russian propaganda sites, such as Sputnik and RT, which have published reports labeling the White Helmets as terrorists.

“In Russia’s view, it’s about convincing people that Russia’s right and the West is evil,” Nimmo said. “It’s about convincing people that the Ukrainian government is fascist, that MH17 was shot down by anything except Russian missiles, that sanctions are counterproductive, that the U.S. should agree with Russia and stop complaining about human rights, democracy, the international law of war and let Russia do what it wants to do.”

Fake news, real alliances

Even though Russia’s social media operation has now turned to Europe, the U.S. remains a target as long as government leaders maintain policies that hurt Russia’s economy.

By turning Western ideals of a free press and free expression against the societies that hold them, casting them as evidence that democracy is flawed, bots can sow widespread discord.

“That begins to degrade our ability to communicate with each other. It makes our discourse way more toxic, so we’re frequently more angry, more combative. That plays into Russian goals, which is to facilitate division within the United States,” Bergmann said.

“Even if Trump didn’t win…For the Russians, they won either way.”

It’s also a national security risk. Trump’s White House has a track record of elevating fake news as fact, which has serious consequences for foreign relations and public policy-making. Repeating fake news from the highest seat of U.S. government effectively aligns it with Russia’s agenda.

In his March testimony before the Senate Intelligence Committee, former FBI agent and counter-terrorism expert Clinton Watts said Trump and his campaign aides have repeatedly “parroted” Russian propaganda. In turn, the state-linked accounts tend to post at “high volumes” when they know Trump is online so that their conspiracy theories will gain traction.

“[Trump] denies the intel from the United States about Russia. He claimed that the election could be rigged; that was the number one theme pushed by RT, Sputnik news,” Watts said. “Until we get a firm basis on fact and fiction in our own country, get some agreement about the facts… we’re going to have a big problem.”

And when a rumor broadcast as fact is repeated by the highest elected government officials, it causes division among the population — some of whom want to believe it and others who know it’s false.

“Even if Trump didn’t win, you were still going to have a guy who during the last two months of the election was all about the election being a rigged system, you would still have this guy who got just under half of the popular vote with a huge platform that would be effective at undercutting a Clinton presidency,” Bergmann said. “For the Russians, they won either way.”

Russia’s disinformation campaign even covers U.S. financial markets, using fake stories to cause dips in stocks and make the U.S. economy seem as fragile as Russia’s, Watts said during his testimony.

“Russian propaganda sometimes peddles false financial stories, causing rapid shifts in U.S. company stock prices that hurt consumer and investor confidence and open the way for predatory market manipulation and short selling. At times, U.S. business employees unwittingly engage with Russian social-media hecklers and honeypots, putting themselves and their companies at risk,” he said.

‘Perhaps we need to be more skeptical’

Ultimately, bots and fake news are a means to an end. They’re a way for Russia to elevate politicians who will have friendly policies toward the country and, in the long run, to create discord among the public. That ties into Russia’s support of nationalist politics online, which, when combined with military force, can split alliances like the European Union and convert once-friendly democratic countries into nationalist allies.

Fighting a decades-long campaign meant to dismantle perceptions of democratic nations won’t come easily, but Watts had a few ideas about what the U.S. could do to make Russia’s attempts less successful. “The Departments of Treasury and Commerce should immediately undertake an education campaign for U.S. businesses to help them thwart damaging, false claims and train their employees in spotting nefarious social-media operations that might compromise their information,” he suggested in his testimony.

Watts also said that the Department of Homeland Security could share “cybertrends” and hacking signature information with private companies to warn one another and stop attacks sooner.

But the most important line of defense, Watts said, will come from society.

“Russia’s social-media influence campaigns achieve great success because mainstream media outlets amplify the salacious claims coming from stolen information,” Watts said. “The world’s largest newspapers, cable-news channels and social-media companies could join in a pact vowing not to report on stolen information that amplified Russia’s influence campaigns.”

If recourse against disinformation campaigns is contingent on the public’s relationship with the media, it comes down to one question: How do you make Americans less susceptible to fake news?

“You could take away all of the clearly fake news tomorrow and I suspect that would not have any radical effect on the state of our politics.”

“[Mass] media needs to examine the ethics of rushing to publish stolen information they got from Wikileaks,” Bergmann said, emphasizing that the circumstances under which information is obtained matter as much as the public’s right to know the information itself.

Social media companies also have a role. Companies such as Facebook came under fire for not doing more to stop the spread of fake news before the election. Since then, the major tech companies — Twitter, Google, and Facebook — have ramped up efforts, with Facebook partnering with 17 media organizations to stop the spread of fake news surrounding the French election. Facebook also recently announced plans to stop governments and individuals from manipulating the platform.

Matthew Gentzkow, an economist, researcher and professor at Stanford University, said social media companies have a “strong incentive” to dispel fake news stories regardless of origin.

“I don’t think Facebook wants to be known as a place where one out of every 10 stories that you see come across your news feed is false. As for how exactly they do that, I think there are a bunch of different ways that they can try to identify those kinds of stories, make it harder for them to circulate widely, and also make it less lucrative on the ad side for people to produce them,” he said.

But making sure consumers know where the news they’re seeing comes from won’t necessarily cure the problem, since people are more apt to believe things that align with their worldview.

“It’s not crazy to think that for some people fake news stories could have been very persuasive if you actually believed that Hillary Clinton ran some child sex ring in a pizzeria,” Gentzkow said, referring to the Pizzagate conspiracy theory that caused a man to brandish a gun in Comet Ping Pong, a Washington, D.C. restaurant.

“If that was actually true, that would be pretty big news, and for a lot of people who might have otherwise voted for Hillary Clinton it probably could have changed their minds,” Gentzkow said. “The issue is those stories were much more likely to be seen by people who already didn’t like Hillary Clinton and probably were not going to vote for her.”

The success of Russian propaganda through social media bots, fake news, and hashtag activism revealed deeper problems in American society, ones that changes to technology and media practices alone can’t fix.

“You could take away all of the clearly fake news tomorrow and I suspect that would not have any radical effect on the state of our politics,” Gentzkow said. He noted that there are bigger issues “related to the crisis of trust in institutions and in media,” like the growing divide between which publications and sites each political faction trusts.

“As a public, we need to look at how we receive information. Perhaps we need to be more skeptical,” Bergmann agreed.

The U.S., along with Europe, is facing uncharted territory with a familiar enemy. Fake news in the digital age isn’t going away, and neither is its success in destabilizing people’s trust in their election processes.

The whack-a-mole nature of policing content on the internet makes eradicating fake news and malicious online political campaigns targeting elections impossible. And since Russia has a history of doubling down when the U.S. returns fire with sanctions or threats of military action, there’s only so much that can be done to deter the behavior.

“That’s the big problem. In terms of botnets, you can report a botnet and sometimes it will be suspended…But the political bots, they can keep going no matter how much they get reported and some cyborg accounts will change their name with one letter or number and just keep on going,” Nimmo said. “In terms of the websites, there’s very little that you can do.”

But there are still lessons we can take from Russia’s bots hovering over European and American elections.

According to Nimmo, you have to teach citizens how to spot a bot and how to be more critical of the information they see.

“All of us are the target for disinformation,” he said. “Look at the story, look at the statement. Is that person trying to comment on the evidence or are they insulting the witness? If they’re insulting the witness then that’s disinformation.”

Even if the information has nothing to do with Russia, he said, verifying the source of information is imperative because education is the only real defense against false information.

“You need to verify and if it’s genuine, it’s evidence” that a site or account repeatedly publishing false stories is unreliable, Nimmo said. “In the short-term, you’ve whacked one mole and there are loads more there. But what you’ve also shown people is this is how you do it.”

This post has been updated to include information regarding a second hacking attempt on French presidential candidate Emmanuel Macron’s campaign on May 5.