Facebook again fails to detect hate speech in ads

SAN FRANCISCO – The test couldn’t have been much easier – and Facebook still failed.

Facebook and its parent company Meta have once again failed a test of how well they can detect overtly violent hate speech in ads submitted to the platform by the nonprofit groups Global Witness and Foxglove.

The hateful messages centered on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation was “literally fanning ethnic violence,” as she said in her 2021 congressional testimony. In March, Global Witness ran a similar test with hate speech in Myanmar, which Facebook also failed to detect.

The group created 12 text-based ads that used dehumanizing hate speech to call for the killing of people belonging to each of Ethiopia’s three main ethnic groups – the Amhara, the Oromo and the Tigrayans. Facebook’s systems approved the ads for publication, just as they had with the Myanmar ads. The ads were never actually published on Facebook.

This time, however, the group notified Meta of the undetected violations. The company said the ads should not have been approved and highlighted the work it has done “enhancing our ability to detect hateful and inflammatory content in the most widely spoken languages, including Amharic.”

A week after hearing from Meta, Global Witness submitted two more ads for approval, again containing blatant hate speech. Both ads, again written in Amharic, the most widely used language in Ethiopia, were approved.

Meta did not respond to several messages for comment this week.

“We selected the worst cases we could think of,” said Rosie Sharpe, a campaigner with Global Witness. “The ones that should be easiest for Facebook to detect. It wasn’t coded language. It wasn’t dog whistles. It was explicit statements that this type of person is not a human being or that such people should starve to death.”

Meta has consistently declined to say how many content moderators it employs in countries where English is not the primary language. That includes moderators in Ethiopia, Myanmar and other regions where material posted on the company’s platforms has been linked to real-world violence.

In November, Meta said it removed a post by Ethiopia’s prime minister, Abiy Ahmed, that urged citizens to rise up and “bury” rival Tigray forces threatening the country’s capital.

In the since-deleted post, Abiy said that “the obligation to die for Ethiopia belongs to all of us”. He called on citizens to mobilize “holding any weapon or ability”.

Abiy, however, continued to post on the platform, where he has 4.1 million followers. The United States and others have warned Ethiopia against “dehumanizing rhetoric” after the prime minister called Tigray forces “cancer” and “weeds” in comments made in July 2021.

“When ads calling for genocide in Ethiopia repeatedly get through Facebook’s net – even after the issue was flagged to Facebook – there’s only one possible conclusion: there’s nobody home,” said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness on the investigation. “Years after the genocide in Myanmar, it’s clear that Facebook hasn’t learned its lesson.”