Hate Speech Presents Significant Challenge for Facebook AI

Mark Zuckerberg has said that it could be a decade before Facebook AI can automatically identify hate speech, but, even then, AI may be only part of the solution.

Facebook CEO Mark Zuckerberg spent two days last month testifying in front of Congress, and one of the many subjects that House and Senate members asked him about was hate speech--specifically, how Facebook deals with it and whether the company could do better.

It will be “five to 10 years” before Facebook has the machine learning tools required to automatically detect hate speech, Zuckerberg said while testifying before a joint hearing of two Senate committees focused on Facebook’s data policies: “I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that.”

That statement came in response to a question from Sen. John Thune, a South Dakota Republican who wanted to know about the steps Facebook takes to detect hate speech on the platform--and the challenges of that job.

But is the Facebook AI timeline in line with that of the rest of the industry? What are the current capabilities for artificial intelligence and machine learning when it comes to picking out hate speech, even nuanced speech, on a platform like Facebook? And, if the technology to effectively weed out problematic speech isn’t yet ready, when can we expect that it will be?

Defining Hate Speech

Facebook does have technology that helps flag some problematic content, Zuckerberg told lawmakers. For example, he said, 99 percent of pro-ISIS or Al Qaeda content is flagged by machines before any human sees it. Facebook also uses automation to flag other kinds of speech--for example, speech that indicates a person may be at risk of self-harm.

But part of the problem of expanding those capabilities, for Facebook and other platforms, is defining what hate speech is in the first place, said Brian Cugelman of AlterSpark.

“We need to ask a preliminary question, which is: Can a judge who specializes in hate speech detect and classify hate speech?” Cugelman said. “How do they handle borderline cases? What about bigots who intentionally hide their hate speech in coded metaphors? How about degrading comments that make no call for violence? How about those that glorify and defend the worst bigots, but don't themselves explicitly say the words?”

Doing Away with Humans?

In this context, AI is a term used to describe standard statistics applied through machine learning algorithms, Cugelman said. Those algorithms replicate patterns from human-labeled examples, which means AI is normally less accurate than human judgement but has clear advantages in scalability.

Facebook can certainly benefit from these advantages, given its enormous user base and the massive volumes of content those users post. But relying on AI over human judgement means that some speech would be incorrectly classified.
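
A rough sketch of that trade-off, in Python with scikit-learn, is below. The handful of training examples, labels, and the test sentence are invented purely for illustration and have no connection to Facebook's actual systems; the point is that the model can only reproduce judgements humans have already labeled, which is both its scaling advantage and its weakness on anything it hasn't seen.

# A minimal text classifier: it learns patterns from human-labeled
# examples and then applies that labeling to new text at scale.
# Tiny invented dataset, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training data: 1 = flag for review, 0 = leave alone.
texts = [
    "we should get rid of those people",   # a reviewer flagged this
    "I disagree with this policy",         # left alone
    "they don't belong in this country",   # a reviewer flagged this
    "great game last night",               # left alone
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model can only replicate patterns it has seen; coded or veiled
# phrasing absent from the training data will often score low.
print(model.predict_proba(["those people again, you know what to do"])[0])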

In 2006, Cugelman worked on a research study for the European Union that examined whether trends in online hate could be detected automatically and then escalated to politicians.

“They concluded that automated detection could not do this without human interventions combined, so I think the best we can hope for is machine learning combined with risk scoring and human interventions,” he said. “This makes it very expensive to manage online hate.”
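
That combination--model, risk score, human escalation--can be sketched as a simple triage rule. A minimal Python sketch follows; the thresholds, function name, and queue are assumptions made up for illustration, not any platform's real policy:

# Triage sketch: the model only scores; ambiguous posts go to people.
AUTO_REMOVE_THRESHOLD = 0.95   # assumed: near-certain violations
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed: ambiguous middle band

def triage(post_text: str, risk_score: float, review_queue: list) -> str:
    """Route a post based on a model-supplied risk score."""
    if risk_score >= AUTO_REMOVE_THRESHOLD:
        return "removed_automatically"
    if risk_score >= HUMAN_REVIEW_THRESHOLD:
        # The expensive part Cugelman describes: every borderline
        # case consumes human reviewer time.
        review_queue.append(post_text)
        return "escalated_to_human"
    return "no_action"

queue = []
print(triage("a borderline coded remark", 0.72, queue))  # escalated_to_human
print(len(queue))                                        # 1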

Bill Ottman, security expert and founder of Minds.com, noted the difficulty members of Congress and Zuckerberg had in coming to a consensus. “A couple of questions, especially from [Senators Ted Cruz (R-TX) and Orrin Hatch (R-UT)], homed in on political bias in content policy enforcement and the blurry terms around what is and what is not acceptable speech,” Ottman said. “However, the general answer from Facebook seemed to communicate that more AI was going to be developed to combat these issues along with an army of content patrollers.”

This means AI may be only a partial solution to the problem of hate-speech detection, even in the future Zuckerberg envisions.

“Hate speech can be pretty subjective, and while AI could be trained to identify keywords or phrases and ‘flag’ them as potentially hateful, it is likely it would still need human intervention to review and make the determination on whether a given word or phrase is hateful,” said Alan Taylor, a project manager with Ivanti.
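
A bare-bones version of the keyword approach Taylor describes might look like the Python below, with a made-up flag list standing in for a real, curated one. The second test case shows the weakness he points to: simple matching cannot tell a dehumanizing use of a word from a literal one, so a person still has to make the determination.

import re

# Hypothetical flag list; a real one would be curated per language and region.
FLAGGED_PHRASES = ["vermin", "go back where you came from"]

def flag_for_review(text: str) -> bool:
    """Return True if any flagged word or phrase appears in the text."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
               for phrase in FLAGGED_PHRASES)

print(flag_for_review("They are vermin."))                   # True: likely hateful
print(flag_for_review("The attic is overrun with vermin."))  # True: a false positive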

Ottman said that AI has its benefits in this area, but he argued that it’s not a foundational solution in itself--and can be dangerous if it’s too opaque or improperly managed and monitored.

“Content patrollers can certainly be useful for policing illegal content, but beyond that they are only as beneficial as the terms of the site are,” Ottman said. “The emerging scientific consensus about censorship of law-abiding content is that it actually amplifies violence and extremism for a variety of different reasons.”

In the United States, it can be unclear what legally constitutes hate speech, and Facebook has the added burden of deciding when to censor or remove offensive content even if it does not technically qualify as hate speech under the law.

Relying on an Existing Solution

There’s also the question of whether there is truly a need for AI when it comes to finding hate speech--on Facebook or elsewhere.

“You don't need AI to find hate speech--it's all over,” Cugelman said. Most of that speech is out in the open, he said, but some is veiled in metaphors or double meanings designed to avoid breaking laws while still making the intention clear to those who understand the references.

“AI has no hope of detecting a well-educated bigot because they're intentionally subverting laws,” Cugelman said.

Facebook already has an easy way to identify hate speech, Cugelman pointed out: its victims. Those who are targeted by hate speech, or who are otherwise upset about certain content, can report it on Facebook and other social media platforms, he said. The technology to assess those reports--human judgement--is already here.
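
In code, that report-driven model is almost trivially simple compared with language-understanding AI. The Python sketch below is illustrative only--the threshold and names are assumptions--but it captures the mechanism: count user reports and surface a post for human review once enough people have flagged it.

from collections import Counter

REVIEW_AFTER_N_REPORTS = 3  # assumed prioritization rule, not a real policy

report_counts = Counter()

def report(post_id: str) -> bool:
    """Record one user report; return True once the post warrants human review."""
    report_counts[post_id] += 1
    return report_counts[post_id] >= REVIEW_AFTER_N_REPORTS

for _ in range(3):
    needs_review = report("post_123")
print(needs_review)  # True after the third report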
