
Twitter and its users aren’t on the same page about what it means to be verified

When the company verified a prominent white nationalist, it showed how differently the public sees the coveted blue badge.

CREDIT: AP Photo/Richard Drew

Twitter went on a tear following the election, suspending several accounts belonging to white nationalists who rose to prominence as Donald Trump clinched the presidency. The move seemed to be a sign that Twitter was turning over a new leaf, punishing users for breaking its terms of service regardless of their celebrity or public interest.

But the social media platform reversed course over the weekend, reinstating formerly banned accounts — including those belonging to Richard Spencer, a self-proclaimed leader of the new wave of white nationalism known as the alt-right, and the National Policy Institute, which is known for promoting racist literature aligned with white identity politics.

And when Spencer and the other accounts came back online, they came adorned with blue badges: Twitter verified the same accounts it had banned for violating its policies.

“The answer to bad speech is more speech.”

Vox reported that Spencer was booted off the platform for having too many accounts with “overlapping” issues, not for his racist rhetoric. To rejoin Twitter, all Spencer had to do was choose one account from which to tweet. He did, and then that account got verified.


So what gives? The fundamental issue here is that the way Twitter uses verified badges differs from how the public views them.

For users, verified badges are an emblem of legitimacy.

Verified users exist in their own stratosphere. Even for those with relatively few followers or unrecognizable names, the blue badge gives verified users celebrity status on a platform that has historically been viewed as democratic, giving every user’s voice a chance to be heard.

Until recently, the general public wasn’t allowed to request verification. The status makes it easy to recruit followers, and it previously gave verified users access to abuse-filtering tools that are now available to everyone. Verified users can filter notifications and responses to exclude non-verified accounts. Even journalists, when using tweets in their reporting, rely more heavily on verified accounts, at least in part because a verified user is assumed to be who they claim to be and to know what they’re talking about.

From Twitter’s perspective, however, verification is a way for the platform to determine whether an account belongs to a person, group, or idea that people are interested in and that is at risk of impersonation.


Twitter wouldn’t discuss how that level of interest is determined — or whether the company believed that users viewed verification the same way.

In response to a request from ThinkProgress, Twitter declined to elaborate on its verification process beyond its public statement:

An account may be verified if it is determined to be an account of public interest. Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas.

But even if Twitter doesn’t acknowledge it, being verified means that what those users share — whatever their opinions may be — will be weighted more heavily by Twitter’s algorithms and by the public at large.

A 2014 study led by Microsoft researchers sought to identify how many Twitter users could be “trusted,” meaning they weren’t spam bots or deleted accounts. To do this, they anchored on verified users: a non-verified user was considered trusted only if they had interactions initiated by and with a verified account. The researchers used an algorithm to separate verified and verified-adjacent users from spam (and potentially troll) accounts, with trustworthiness increasing the fewer degrees a user was separated from a verified account.

Microsoft’s researchers weren’t looking at “trust” in a moral sense. But their method of vouching for accounts based on verified-user interactions bolsters the significance of Twitter’s algorithms, which float opinions from verified users to the top.
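The paper’s actual method is more involved, but the core idea — trust that decays with each degree of separation from a set of verified accounts — can be sketched in a few lines. The sketch below is a toy illustration, not the researchers’ code; the interaction graph, the two-hop cutoff, and the halving decay rate are all assumptions invented for this example.

```python
from collections import deque

def trust_scores(interactions, verified, max_hops=2):
    """Toy propagation of 'trust' outward from verified accounts.

    interactions: dict mapping each user to the set of users they
    have mutual interactions with. verified: set of verified handles.
    Trust halves with each degree of separation; users beyond
    max_hops never get a score and are treated as potential
    spam or troll accounts.
    """
    scores = {u: 1.0 for u in verified}  # verified accounts are fully trusted
    frontier = deque((u, 0) for u in verified)
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in interactions.get(user, ()):
            if neighbor not in scores:  # shortest path to a verified user wins
                scores[neighbor] = scores[user] / 2
                frontier.append((neighbor, hops + 1))
    return scores  # users absent from the result are flagged as untrusted

# Hypothetical graph: a verified outlet, a reporter it interacts with,
# a reader who interacts with the reporter, and an isolated spam bot.
graph = {
    "@nytimes": {"@reporter"},
    "@reporter": {"@nytimes", "@reader"},
    "@reader": {"@reporter"},
    "@spambot": set(),
}
print(trust_scores(graph, verified={"@nytimes"}))
# {'@nytimes': 1.0, '@reporter': 0.5, '@reader': 0.25} — @spambot is excluded
```

The point of the exercise is the one the article draws: in a scheme like this, a verified badge isn’t just a label — it is the seed from which all credibility flows.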

Another study, from the University of Pittsburgh, Northeastern University, and Cornell University, found that celebrities, or verified users, were retweeted at a higher rate than non-verified accounts. And although those accounts make up less than 25 percent of total users, retweets of their posts account for 75 percent of all retweets.


The resulting aquarium effect — where non-verified users look on and amplify messages from verified users — means that opinions like Spencer’s are assumed by the public to carry the same validity and veracity as those of President Barack Obama, the New York Times, and filmmaker Ava DuVernay.

The issue ties back to fake news. False stories can gain traction, at least on Twitter, when a verified user or other prominent figure amplifies them. Consider rapper B.o.B. espousing his belief that the world is flat, or, more recently, the son of incoming national security adviser Michael Flynn tweeting that former presidential candidate Hillary Clinton could be connected to a pedophile ring run out of a Washington, D.C. pizza shop. (Both theories have been widely debunked.)

Twitter and Facebook have been heavily criticized for failing to moderate user content that promotes inaccurate information or exhibits abusive behavior. In the case of white nationalists, Twitter’s response has been uneven. The company waited until actress and comedian Leslie Jones was bullied off the platform before it banned accounts — including the one belonging to Breitbart editor Milo Yiannopoulos — linked to racist and sexist messages directed at her. Twitter’s decision to verify Spencer and other accounts connected to white nationalism is another inconsistency, one that shows the platform’s reluctance to infringe on speech.

In a Q&A Tuesday, Twitter CEO Jack Dorsey asked NSA whistleblower Edward Snowden what should be done about fake news. Snowden’s response, in part, defended tech companies that have been asked to moderate content.

“There’s been responses that I think are quite negative,” he said, adding that the solution to bad speech isn’t censorship. “The problem of fake news isn’t solved by hoping for a referee but we as users helping each other out…The answer to bad speech is more speech,” he said. “Critical thinking matters.”

That has been Twitter’s go-to approach — one that has helped create a platform often used for targeted harassment and abuse disguised as harmless opinion. Snowden has a point: it takes unified voices to drown out the malice. But because Twitter’s verification process values some people’s opinions over others’, it’s going to take a united front of verified users to drown out the bad speech.

Or Twitter could adopt the public’s view: that check marks indicate more than just public interest. They’re badges of authority, and as with any microcosm of society, it’s imperative to think carefully about to whom that power is given.