A new investigation exposes how Facebook approved ads inciting violence against US election workers, while YouTube and TikTok detected and rejected the same ads.

Facebook failed to detect three-quarters of test ads explicitly calling for violence against, and the killing of, US election workers ahead of the heavily contested midterm elections earlier this month, according to a new investigation by Global Witness and New York University Tandon School of Engineering’s Cybersecurity for Democracy (C4D) team.

The investigation tested Facebook, TikTok and YouTube’s ability to detect ads that contained death threats against election workers and revealed starkly contrasting results for the social media giants: YouTube and TikTok suspended our accounts for violating their policies, whereas Facebook accepted 15 of the 20 (75%) advertisements containing death threats that we submitted to them for publication.

The ads contained ten real-life examples of death threats issued against election workers, including statements that people would be killed, hanged or executed, and that children would be molested. We tested both English- and Spanish-language versions of these ads and submitted them on the day of or the day before the midterm elections. Global Witness and C4D are not publishing these ads due to the violent speech they contain [1].

Once Facebook approved the ads for publication, Global Witness and C4D removed them before they could be displayed on the platform, in order to avoid spreading hateful and violent speech.

This investigation comes on the heels of a recent report by Global Witness and C4D showing that Facebook also failed to fully detect election disinformation ads ahead of the midterms, including ads that gave the wrong election date and ads attempting to delegitimize the electoral process. That investigation similarly tested TikTok and YouTube’s ability to detect election disinformation; unlike in this investigation, TikTok approved 90% of the election disinformation ads.

“It’s incredibly alarming that Facebook approved ads threatening election workers with violence, lynching and killing – amidst growing real-life threats against these workers,” said Rosie Sharpe, investigator at Global Witness. “This type of activity threatens the safety of our elections. Yet what Facebook says it does to keep its platform safe bears hardly any resemblance to what it actually does. Facebook’s inability to detect hate speech and election disinformation – despite its public commitments – is a global problem, as Global Witness has shown this year in investigations in Brazil, Ethiopia, Kenya, Myanmar and Norway.”

“Facebook’s failure to block ads advocating violence against election workers jeopardizes these workers’ safety. It is disturbing that Facebook allows advertisers caught threatening violence to continue purchasing ads. Facebook needs to improve its detection methods and ban advertisers that promote violence,” said Damon McCoy, co-director of C4D.

Global Witness approached Facebook’s owner, Meta, for comment on these findings and a spokesperson responded: “This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms. We remain committed to continuing to improve our systems.”

We asked Meta for evidence to support its claim that the platform is better at dealing with incitement to violence than other platforms. Meta provided quotes published in the media noting that it devotes more resources to content moderation than other platforms and that it moderates better than some alt-right platforms. While these assertions may be factual, they do not constitute evidence that Meta is better at detecting incitement to violence than other mainstream platforms. In any case, there should be no tolerance for failure ahead of a major election, when tensions and the potential for harm are high.

Global Witness and C4D call on social media platforms to adequately resource their content moderation around the world to ensure their products are safe to use. See the full list of recommendations that Facebook must implement urgently.