A bombshell investigation by ProPublica published Wednesday sheds light on how Facebook trains its thousands of content moderators to police hate speech, including a leaked training slide that suggests they “protect” “white men” but not “black children.”
The report, which includes internal training documents used by Facebook moderators, details how the social network’s policies on hate speech “tend to favor elites and governments over grassroots activists and racial minorities.”
Here are the highlights:
- Facebook trains its thousands of content moderators to delete hate speech directed at so-called protected categories of people, including race, sex, gender, and religious affiliation.
- Hate speech against “subsets” of protected categories, such as “radicalized Muslims,” is considered less subject to censorship.
- To illustrate the difference between protected categories and unprotected subsets, a training slide asks moderators whether they “protect” “white men,” “female drivers,” or “black children.” The correct answer provided on the slide is “white men.”
- President Donald Trump’s 2016 Facebook posts about barring Muslims from entering the US violated Facebook’s internal rules on hate speech, but CEO Mark Zuckerberg personally intervened to keep the posts from being deleted.
- Facebook uses the US State Department’s list of designated terrorist groups and other similar databases to help it monitor hate speech, but it also keeps a “secret list” of designated “hate organizations” that it bars.
When asked for comment on ProPublica’s report, a Facebook representative directed Business Insider to a post about moderating hate speech published Tuesday on Facebook’s company blog. In the post, Richard Allan, a Facebook VP of public policy, said the company had deleted about 66,000 hate-speech posts per week over the past two months.
“But it’s clear we’re not perfect when it comes to enforcing our policy,” he wrote. “Often there are close calls – and too often we get it wrong.”
You can read ProPublica’s full investigation on its website. The report comes a month after The Guardian detailed Facebook’s rules for moderating sex and violence.