Back in March, we began testing the ability of social media companies to detect prohibited content with an investigation in which we found that Facebook failed to identify hate speech against the Rohingya minority in Myanmar. What we found next exposed stark differences in platforms’ abilities to detect content that blatantly breaches their policies, and large divergences in how users around the world are treated.

We conducted ten separate investigations in Myanmar, Ethiopia, Kenya, Norway, Brazil (three times), and the USA (three times). In each we submitted explicitly prohibited content in the form of adverts, presented in clear text, for the platforms to review and approve or reject for publication. (After the review process was complete, we removed the ads before they were published on the site.)

For example, in Myanmar and Ethiopia we tested Facebook’s ability to detect calls to genocide; in Ethiopia and Kenya we tested whether it would spot incitements to violence; and in Norway we tested its detection of extreme hate speech and disinformation. In Brazil and the USA we assessed Facebook’s ability to catch election disinformation, expanding our study in both countries to include YouTube and in the USA to include TikTok. Finally, we tested the ability of all three companies to detect death threats against election workers in the USA.

The results revealed shocking failures, and large inconsistencies both among platforms and in their practices across different countries.

Facebook’s multiple failings

Our research showed widespread failures by Meta (Facebook’s parent company) to implement its content moderation policies. The company approved ads that clearly violated its policies in every investigation we conducted, and in five of our investigations it did not reject a single one of our hate speech or election disinformation ads for publication:

  • In Myanmar, we submitted eight ads in Burmese containing real-life examples of hate speech inciting violence and genocide against the Rohingya, drawn from a UN fact-finding mission report. All of the ads were accepted for publication.
  • In Ethiopia, we submitted 12 ads in Amharic containing hate speech inciting violence and genocide during the ongoing civil war. All of the ads were accepted for publication. After we informed Meta of this serious problem with its content moderation in Ethiopia, and a spokesperson acknowledged that the ads “shouldn’t have been approved in the first place as they violate our policies,” we submitted another two examples of real-life Amharic-language hate speech. Both ads were, again, accepted for publication.
  • In Kenya, we submitted 20 ads, half of them in Swahili and half in English, containing hate speech and ethnic-based calls to violence ahead of elections in the country. Our English-language hate speech ads were initially rejected for failing to comply with Meta’s Grammar and Profanity policy. Meta invited us to update the ads; after we made minor corrections to the grammar and removed the swear words, all of the English-language ads were accepted for publication. All of the Swahili-language ads were accepted for publication without any editing.
  • In Brazil, we submitted 10 ads in Portuguese containing election disinformation ahead of elections in the country. All of the ads were accepted for publication. In addition, we submitted the ads from outside Brazil, from an account that had not been through the “ad authorisations” process that Meta says it requires before an account can post election ads.
  • In Norway, we submitted 12 ads containing extreme hate speech and disinformation. These included racist, anti-immigrant and anti-LGBTQ hate speech, health disinformation and extreme dieting messaging. All of the ads were accepted for publication. 

One might assume that these platforms would be better at moderating English-language content than content in less widely spoken languages. However, we found that Facebook was completely unable to detect any of the English-language content that we submitted to it in Kenya and Norway:

  • In our investigation in Norway, we submitted 12 ads, a quarter of which were in English and three-quarters in Norwegian. Meta approved all 12 ads for publication.
  • Our investigation in Kenya used ads in English and in Swahili. Both sets of ads were ultimately accepted in their entirety.

Treatment in the US versus elsewhere

Our studies showed that Facebook and YouTube treat users differently depending on where in the world they are. The companies appear to put more effort into content moderation for users in the US (in English or Spanish) than for users in the other countries we’ve examined. Despite this, Facebook still failed to detect content violations in the US. In an investigation ahead of the November midterm elections in the USA, we created ten ads in English and ten ads in Spanish containing election disinformation, and submitted them to Facebook, TikTok and YouTube:

  • Unlike in many other countries, where Facebook failed to detect any violating ads, the company was partially effective at detecting violating ads in the USA. In a first test in early October, with ads posted from the UK, 30% of the ads in English were approved, along with 20% of the ads in Spanish. We tested the ads again two days later, this time posting from a different account within the USA. While the percentage of English ads approved dropped to 20%, the percentage of Spanish disinformation ads approved rose to 50%.
  • TikTok approved a whopping 90% of the US disinformation ads. In a subsequent test of content containing death threats against US election officials, however, the platform identified the violations and suspended the account.
  • YouTube succeeded both in detecting 100% of the US disinformation ads and in suspending the account and the channel carrying them, suggesting that its content moderation was working as intended. However, the company performed very differently in our third test of election disinformation ads in Brazil (ahead of the presidential run-off in October), in which it accepted every ad we submitted for publication. The same study saw Facebook approve 50% of the ads, including some which it had previously rejected in our second test of its ability to moderate ads in Brazil.

Facebook versus TikTok versus YouTube

Our investigations that tested multiple platforms with the same ad content showed mixed detection rates among the companies.

  • An initial investigation in the lead-up to the US midterms in November assessed Facebook, TikTok, and YouTube’s detection of election disinformation. TikTok performed worst, approving 90% of the ads, followed by Facebook, which approved between 20% and 50% of the ads depending on the test and language, while YouTube approved none of our ads.
  • We conducted a parallel investigation around the midterms to test the ability of all three platforms to detect ads that contained death threats against election workers. In starkly contrasting results, YouTube and TikTok suspended our accounts for violating their policies, whereas Facebook accepted for publication 90% of the death-threat ads we submitted to it in English and 60% of those we submitted in Spanish.
  • In Brazil, our third test of election disinformation saw Facebook approve 50% of the ads (including some which it had previously rejected in our second Brazil test) whereas YouTube performed worse, approving 100% of the content.

Collaborative findings

Most of these investigations and their results reflect joint work carried out with excellent partners. We credit the investigations’ successes to these fruitful collaborations:

  • We partnered with the legal non-profit Foxglove and Ethiopian researcher Dagim Afewerk Mekonnen to carry out our investigation in Ethiopia, and again with Foxglove in our investigation in Kenya.
  • Alongside our investigation in Brazil, we collaborated with NetLab at the Federal University of Rio de Janeiro, which conducted a parallel study into the number of ads in the Facebook Ad Library that contained election-related disinformation. We also organised a sign-on letter for Global North organisations to amplify the social media asks of 100+ Brazilian organisations.
  • We partnered with the Cybersecurity for Democracy (C4D) team at New York University’s Tandon School of Engineering to carry out investigations into content around the US midterm elections.
  • Our investigation into extreme content in Norway was carried out in partnership with SumOfUs.

Impact

Alongside widespread media coverage of our investigations, we’ve received some promising signs from the companies themselves. In the wake of our investigation in Brazil, Facebook announced it would “prohibit ads calling into question the legitimacy of the upcoming election”, though it remained unclear whether this represented a genuine change in enforcement or merely a restatement of existing policy.

Following our investigation in Norway, the justice committee in the Norwegian parliament published its comments on regulating digital services, in which it followed our recommendations, with every party on the committee voting for an investigation into a ban on behaviour-based marketing. Where the committee led, the whole of parliament then followed: in a unanimous vote, the Norwegian parliament called for the government to investigate a ban on advertising based on the mass collection of personal data and profiling.

In the year ahead, we’ll continue to investigate the ability of these major social media companies to enforce their policies and to prevent incitement to violence and impediments to democratic processes. With a slew of national elections set for 2024, these platforms have urgent work to do to safeguard electoral integrity and protect people from violence.

Author

  • Henry Peck

    Campaigner, Digital Threats