
Facebook Sexism, YouTube Attacks On Feminist Frequency, And How Hate Speech Make Tech Take Sides


"Facebook Sexism, YouTube Attacks On Feminist Frequency, And How Hate Speech Make Tech Take Sides"


Yesterday, my colleague Rebecca Leber reported that “Seven days after Women Action and the Media, the Everyday Sexism Project and activist Soraya Chemaly called on Facebook to remove content that condones hate speech and violence against women, Facebook responded that it will update its policies that add a new emphasis to taking domestic violence seriously.” The coalition behind that campaign had persuaded 15 companies to pull their advertising from Facebook as long as the social media service continued to treat memes encouraging or praising domestic violence as if they didn’t qualify as gender-based hate speech. In response, Facebook promised to work closely with those groups, and pledged to “review and update” its guidelines for what constitutes such speech, retrain the teams that respond to flagged items, tie users’ verified identities more closely to some content that qualifies as “cruel or insensitive,” and establish more formal working relationships with women’s organizations. This is a significant victory for Women, Action and the Media, the Everyday Sexism Project, and Soraya Chemaly. But the same day, another event illustrated how far technology and social media companies still have to go in accommodating themselves to the realities–and limitations–of the communities that make them valuable.

To much less notice on Tuesday, Anita Sarkeesian, the feminist culture critic who was relentlessly harassed and threatened for the simple act of Kickstarting a project to examine the representation of women in video gaming, posted the latest video in that series, Tropes vs. Women. What followed was predictable. “Looks like my harassers abused YouTube’s flag function to get my new Tropes vs Women video removed. Not the first time it’s happened,” Sarkeesian wrote on Twitter. “An hour after our video went live I got an email saying ‘The YouTube Community has flagged one or more of your videos as inappropriate.’ Here’s the ‘community flagged’ removal notice from YouTube. I appealed and 45 mins later my video was restored: pic.twitter.com/wilya1PHsF.”

In other words, the YouTube system worked exactly the way the women’s coalition would like Facebook’s system to work. Content reported as offensive was taken down quickly and preemptively, and the person who created it had to go through an appeals process to get it back online; after that adjudication, Sarkeesian’s video did get back in front of the audience that wanted to see it. The problem was that the system worked to the detriment not of content that advocates violence against women or minimizes its impact, but of content explicitly aimed at the opposite.

Taken together, these two events illustrate the challenge companies like Facebook and YouTube face as they seek to regulate content on the basis of user complaints: there is no “YouTube Community” or “Facebook Community” with an agreed-upon set of standards for what constitutes hate speech or inappropriate content. There are multiple communities, some of them violently at odds with one another. And if social media or technology companies want to keep some of their users–and, it seems, some of their advertisers–those companies may have to decide between their user communities when they come into conflict.

This cuts against both tech-libertarian ideals and the market principles that suggest internet communities should be able to regulate themselves successfully, editing out offensive content and expelling members who don’t adhere to stated or unwritten codes of conduct. In practice, that has proven less true. Gated communities like the pay-to-play site Ask Metafilter, or heavily moderated sites like Ta-Nehisi Coates’ blog at The Atlantic, do exist, but they’re considered exceptions rather than the general rule, which tends more towards a consensus around sentiments like “don’t read the comments.” Sites like Facebook and YouTube aren’t so much communities as platforms on which many communities, some of them dedicated to the eradication of the ideas or sentiments expressed by others, can operate. And these conflicts aren’t always simply a matter of community governance: Reddit’s attempts to identify the Boston Marathon bombers, for example, may have had real repercussions for falsely identified suspects and their families.

It might be nice–and in the short term, profitable–for technology and social media companies to act like governments and promise unfettered free speech to all comers. But in the long run, given that clear, universal agreements are not likely to emerge on what constitutes hate speech or inappropriate content, companies like YouTube and Facebook are going to have to make their own decisions, and to stand with certain groups of users over others. WAM and others made this decision easier by demonstrating that advertisers are on the side of certain users when it comes to content that makes a joke or a virtue of violence against women. But other fights are sure to follow.

