Facebook: Our Policing System Detects Violence, Nudity More Than Hate Speech

PARIS, FRANCE - APRIL 06: In this photo illustration, the logos of the Messenger and Facebook applications are displayed on the screen of an Apple iPhone on April 6, 2018 in Paris. In the midst of turmoil following the Cambridge Analytica scandal, Facebook faces a host of questions about its privacy and confidentiality practices. Messenger, the messaging application launched by Facebook, is at the center of attention: Facebook analyzes the links and images that users send through Messenger, and even reads exchanged messages if they are flagged, to make sure the contents comply with its terms of use. Mark Zuckerberg confirmed this heightened monitoring within the messaging application a few days ago. (Photo Illustration by Chesnot/Getty Images)

SAN FRANCISCO (AP) — Getting rid of racist, sexist and other hateful remarks on Facebook is challenging for the company because computer programs have difficulty understanding the nuances of human language, the company said Tuesday.

In a self-assessment, Facebook said its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda. Facebook said automated tools detected 86 percent to 99.5 percent of the violations in those categories.

For hate speech, Facebook’s human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.

Tuesday’s report was Facebook’s first breakdown of how much material it removes for violating its policies. The statistics cover a relatively short period, from October 2017 through March of this year, and don’t disclose how long, on average, it takes Facebook to remove material violating its standards. The report also doesn’t cover how much inappropriate content Facebook missed.

Nor does it address how Facebook is tackling another vexing issue — the proliferation of fake news stories planted by Russian agents and other fabricators trying to sway elections and public opinion.

It’s not surprising that Facebook’s automated programs have the greatest difficulty distinguishing between permissible opinions and despicable language that crosses the line, said Timothy Carone, who teaches about technology at the University of Notre Dame.

“It’s like trying to figure out the equivalent between screaming ‘Fire!’ in a crowded theater when there is none and the equivalent of saying something that is uncomfortable but qualifies as free speech,” Carone said.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.

Facebook took down 3.4 million pieces of content depicting graphic violence during the first three months of this year, nearly triple the 1.2 million during the previous three months. In this case, better detection was only part of the reason. Facebook said users were more aggressively posting images of violence in places like war-torn Syria.

The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump’s 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.

The report also covers fake accounts, which have gotten more attention in recent months after it was revealed that Russian agents used them to buy ads in an attempt to influence the 2016 elections.

Facebook has previously estimated that fake accounts make up 3 percent to 4 percent of its monthly active users. Tuesday’s report said the company disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter, a number it said tends to fluctuate from quarter to quarter. More than 98 percent of those accounts were caught before users reported them, Facebook said.
