Twitter’s Small Steps Toward Ending Online Harassment


Twitter expanded its policies this week for better and possibly for worse. The microblogging site banned harassing and violent language such as direct threats or tweets that promote violence against other people — a win for harassment victims. But, to much disappointment, the company also unveiled a new opt-in direct messaging function that lets users exchange private messages without following one another.

The microblogging site has a track record of halfheartedly addressing its harassment problem. But while the new changes won’t stop abusive, vitriolic threats online, they are a thoughtful effort at balancing user safety with free expression.

Twitter is well aware of its pervasive harassment problem. Twitter CEO Dick Costolo told employees in an internal memo earlier this year, “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret. There’s no excuse for it.”

Despite repeated calls over the years for meaningful policies to address and deter harassment, the company only vowed to make such changes last year, when Zelda Williams threatened to quit after receiving graphic images of her father, Robin Williams, in the wake of his death.

Since then, Twitter has incrementally rolled out changes and partnered with Women, Action, and the Media to develop comprehensive policies that better address the site’s rampant abuse problem.

Marginalized groups such as people of color, women and the LGBT community tend to bear the brunt of abuse online. Twitter’s open climate, which can foster intense but productive debates on controversial issues such as gun rights and police brutality, also makes attacks easy.

On Tuesday, Twitter announced it would lock accounts that continuously post hateful tweets — like Twitter jail for trolls — for a predetermined amount of time. Offending users would then be prompted to delete the offensive tweets. Twitter also made it easier for users to “tune out” abusive tweets, such as death and rape threats, with a filter that shows them only if users seek them out.

Those changes came on the heels of a seemingly contradictory change to direct messages that many bemoaned as a green light for abuse. Twitter users slammed the company Monday when it announced that direct messaging would no longer require users to follow one another, a move critics decried as another thick-headed fumble that ignored persistent harassment concerns.

But while the juxtaposition of this week’s policy changes may seem awkward and counterproductive, it highlights social media’s greatest challenge: making the internet safe enough so people aren’t hit with a wave of violent threats every time they sign on, but open enough to encourage the discussion of hard, uncomfortable and sometimes graphically violent topics.

Social media platforms including Facebook and Reddit have struggled, and in a large part failed, to strike the right balance in protecting speech and preventing online abuse. That’s in part because the things users love about social media — rapid-fire connections to people who would otherwise be unreachable — also make online communities ripe for harassment.

Several celebrities and public figures have quit using the site because of harassing and bullying behavior. Actress Ashley Judd filed police reports against her Twitter harassers to send a message that threatening sexual violence is “not okay.”

A chief complaint users had was that reporting abusive behavior to Twitter didn’t do much. The company introduced a new function in March that facilitates users reporting physical threats directly to law enforcement.

Solving these problems is, by Twitter’s own admission, a work in progress that is far from perfect. But the new policies at the very least inch toward a better resolution.