
Proposed Bill Could Make Social Media Companies Report Terrorism-Related Content

The image used in Weh’s campaign video CREDIT: SCREENSHOT/YOUTUBE

Congress may soon require tech companies to treat terror-related content like child pornography by reporting suspicious activity to law enforcement agencies.

The Senate Intelligence Committee approved a version of the intelligence authorization bill for 2016 last week, a funding measure that would require social media sites such as Twitter and YouTube to alert law enforcement officials of activity that could be linked to the Islamic State (ISIS) and other suspected terror groups.

“In our discussions with parts of the executive branch, they said there have been cases where there have been posts of one sort or another taken down” that could have provided valuable intelligence, a committee aide told the Washington Post on condition of anonymity.

The bill, which hasn’t yet been filed and awaits a Senate vote, would apply only to companies that actively monitor their sites’ content and electronic communications, including email service providers such as Google. The committee modeled the bill after the 2008 Protect Our Children Act, which requires tech companies to report sexually explicit or suggestive pictures of children to the National Center for Missing and Exploited Children (NCMEC), which is partly funded by the Justice Department.


The pending legislation would potentially give social media companies the discretion to determine which content is terrorism-related or originates from government-watched targets, a task tech companies don’t necessarily want and one that many in the industry consider a violation of consumer privacy.

If passed, the bill would add the task of monitoring content for potential terror-inciting posts, images, or videos, which could prove trickier than spotting obscene images such as child pornography, famously defined as something that is known when it is seen. Content moderators for companies such as Facebook, Google, Yahoo, and Twitter would have to move beyond snap judgments about the nude or grotesque images typically subject to takedown requests and consider the intent and source behind what users post.

“Asking Internet companies to proactively monitor people’s posts and messages would be the same thing as asking your telephone company to monitor and log all your phone calls, text messages, all your Internet browsing, all the sites you visit,” a tech industry official anonymously commented to the Post. “Considering the vast majority of people on these sites are not doing anything wrong, this type of monitoring would be considered by many to be an invasion of privacy. It would also be technically difficult.”

Despite their expressed disapproval of monitoring for potential terror threats, social media platforms have become increasingly inclined to scrutinize user behavior, for better or worse. Facebook in particular has proactively taken down accounts that smack of child pornography, and it has grappled with whether to remove violent footage ranging from public fights to acts of terror such as the 2013 Boston Marathon bombing and the ISIS-released video of the beheading of photojournalist James Foley last year.

Law enforcement has increasingly relied on social media data and tech companies’ cooperation to bolster criminal investigations. After Google reported to NCMEC officials last year that a Houston man was storing child pornography in his email, Houston Detective David Nettles told a local news outlet that without Google’s help, the police wouldn’t have been able to act: “He was trying to get around getting caught, he was trying to keep it inside his email…I would never be able to find that…I really don’t know how they do their job, but I’m glad they do it.”