20 June 2023, London - Extreme, violent and hate-filled adverts were approved for posting in South Africa on the leading social media platforms Facebook, TikTok and YouTube, according to a new test carried out by Global Witness and the South African public interest law firm Legal Resources Centre.

The test involved ten adverts based on real-life content, written in English and translated into Afrikaans, with nine of them also translated into Xhosa and Zulu. The ads were then submitted for approval on all three platforms. They included calls for the police in South Africa to kill foreigners, referred to non-South Africans as a “disease”, and incited violence through “force” against migrants.

Despite the extreme and unequivocal nature of the content, every single ad was approved for publication by all three platforms, apart from one ad that Facebook rejected in English and Afrikaans but approved in Xhosa and Zulu. The post calling for the police to kill “illegal foreigners” was approved in all languages by all three platforms. After the ads were approved for publication, Global Witness and the LRC removed them all before they went live.

The findings come as the UN marks World Refugee Day and follow warnings by its human rights experts that South Africa is "on the precipice of explosive xenophobic violence". Last year violent anti-migrant protests in Johannesburg led to the death of a Zimbabwean man, with South African police forces increasing their numbers in response to the threat posed by Operation Dudula, a group organising many of these protests.

Sherylle Dass, Regional Director at the Legal Resources Centre, said:

“With rising tensions over the last couple of years and in the lead up to an important election year for South Africa in 2024, we are deeply concerned that social media platforms are neglecting their human rights responsibilities here. We know from history that social media campaigns can result in real-world violence, so it is imperative that the platforms don’t overlook South Africa and take proactive steps now to protect livelihoods and lives in future.”  

Global Witness has now run more than ten similar tests of social media companies’ ability to tackle online hate, including in Brazil, Ethiopia, Ireland, Kenya and Myanmar. Each time, the findings have shown glaring holes in how these companies detect, block and remove hate speech before it spreads across their platforms.

Hannah Sharpe, Digital Threats Campaigner at Global Witness, said:  

“This is another disappointing outcome for social media companies. It is not a one-off but a repeated failure to enforce their own policies on hate speech and the incitement of violence on their platforms. As we approach one of the most significant election years globally, where xenophobic rhetoric will be exploited for political gain, we expect, at the very least, that platforms uphold their own existing policies - having them on paper is meaningless unless they are actively enforced. The devastating consequences of this negligence were evident in Myanmar (Burma) and Ethiopia. Failure to enforce these policies has real-world impacts and costs lives.”

In response to these findings, Meta and TikTok said that the hate speech in the ads violates their policies, that their systems are not perfect, and that ads go through several layers of verification. Google was approached for comment but did not respond.

Global Witness and the Legal Resources Centre call on Meta (Facebook’s parent company), Google (which owns YouTube) and TikTok to invest in equitable and effective safeguards to protect democratic and human rights around the world.