In an age of the ‘manosphere’, where women are under constant threat from misogynistic attacks online, our investigation shows that test adverts containing extreme hate against women were approved for publication by Facebook, TikTok, X/Twitter, and YouTube.

MARCH 2022: QAANITAH HUNTER, NEWS24 ASSISTANT EDITOR AT THE SANEF PICKET OUTSIDE PIETERMARITZBURG HIGH COURT DURING JACOB ZUMA CASE AGAINST NEWS24 JOURNALIST KARYN MAUGHAN, SOUTH AFRICA. CREDIT: DARREN STEWART/GALLO IMAGES VIA GETTY IMAGES

Imagine going to work and being told you “deserve a bullet in the head”, or that you are “a thing, a bitch, a lying bitch.” Imagine how you’d feel if these threats then targeted those closest to you, including your children. These are real-life examples of hate speech attacks on social media faced by women journalists simply for doing their jobs. They are part of a terrifying global trend in which online violence against women journalists spills over into offline harm. The large social media corporations that host these horrific incidents have hate speech policies designed to protect users. In light of this, we set out to test how good they are at enforcing those policies and detecting misogynistic hate speech on their platforms.

Our investigation: Testing social media platforms’ detection of misogynistic hate speech

Together with the Legal Resources Centre, an independent public interest law centre in South Africa, we carried out a joint investigation looking at Facebook, TikTok, X/Twitter, and YouTube’s ability to detect and remove real-world examples of hate speech targeting women journalists. Rather than publishing the examples on the platforms as user content, we submitted them to all four platforms in the form of adverts, so that they could be scheduled for future publication and removed before going live. This methodology tests the platforms’ first line of defence against hate speech, the review that takes place before an advert is published, giving us an indication of their ability to identify and moderate actual hate speech that is live on the platform.

The test consisted of 10 adverts in four languages: English, Afrikaans, Xhosa, and Zulu (40 adverts in total). All were based on real-world examples of misogynistic hate speech, edited only to clarify language and grammar; none were coded or difficult to interpret, the text was illustrated by video footage, and all clearly violated the platforms’ advertising policies. The content matched the platforms’ own definitions of hate speech as set out in their policies: all of the adverts targeted women specifically and were violent and dehumanising, expressing inferiority, contempt, and disgust. For example, the adverts referred to women as prostitutes, psychopaths, or vermin, and called for them to be beaten and killed.


EXAMPLE STILLS OF ENGLISH TEST ADVERTS USED IN INVESTIGATION 

The results: Widespread approval of violent hate speech test adverts across Facebook, TikTok, X/Twitter, and YouTube

Nearly all the test adverts were approved for publication by all four platforms. Meta and TikTok approved all 40 ads within 24 hours. YouTube also approved them all, although following an automated review it flagged 21 of the 40 with an approved but ‘limited’ status, meaning they were still deemed appropriate for some audiences. X/Twitter approved them all apart from two English adverts, whose publication was ‘halted’, and only after we conducted further tests into the platform’s approval process [1]. After capturing these results, we deleted all the test adverts before they were published.

Our tests show that social media corporations’ automated and AI-informed content moderation systems are not fit for purpose if even the most extreme and violent forms of hate speech are approved for publication, in clear violation of their own policies. Whilst these technologies are vital for moderating at scale, they are clearly not sophisticated enough to replace human moderators and fact-checkers, or to justify a lack of investment in them.

We’ve previously conducted numerous investigations using this same methodology, repeatedly demonstrating social media platforms’ failure to enforce their hate speech policies and highlighting that this is a widespread, systemic and ongoing problem. In every case, we give the corporations the opportunity to comment on and address our findings. On numerous occasions, including in previous investigations in Ethiopia, Kenya, and South Africa, Meta have caveated their response with: ‘we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes.’ The extent and severity of the errors seen in results gathered over the last two years suggest that there isn’t nearly enough action being taken. In fact, over the last year Google, Meta, and X/Twitter have all done the opposite, reportedly making drastic cuts to the operations teams and contracting firms responsible for dealing with hate speech and disinformation. Although true figures are hard to determine amid the mass scale of redundancies, it was reported that Google cut a unit responsible for misinformation, radicalisation, and toxicity by a third, Meta lost hundreds of content moderators globally, including in Africa, and X/Twitter axed 15% of its trust and safety team.

Failure to address this problem enables online violence and puts the onus on victims to report their abusers, a process journalists often describe as futile.

South African journalist Ferial Haffajee has spoken openly about the abuse she’s faced as Associate Editor of the Daily Maverick and former Editor-at-large at HuffPost South Africa. She told us: “After 29 years as a journalist, I should be bolder and more confident than ever but online hate and the threat of offline violence exhausts and terrifies me. It’s not just attacks from individuals, troll armies are often weaponised to cause insurmountable levels of abuse, which are impossible to stem through deleting and blocking alone. Along with many other journalists, I have tried to use the social media platforms’ reporting mechanisms and even contacted the companies directly, but it is to no avail. They knowingly turn a blind eye while playing host to assaults on women’s rights and media freedom.” 

“After 29 years as a journalist, I should be bolder and more confident than ever but online hate and the threat of offline violence exhausts and terrifies me.” - Ferial Haffajee, Associate Editor of the Daily Maverick

Other South African journalists we spoke to as part of this investigation echoed that their experiences of misogynistic, violent, and sexualised abuse aren’t isolated cases but are pervasive within the industry and treated as part of the job. In the South African context, many incidents are driven by politicians who incite their supporters to troll women journalists, with the goal of undermining and silencing those who hold them to account. As well as being a serious threat to women’s freedom of speech, livelihoods, and personal safety, this gendered hate speech is therefore also a threat to media freedom and democracy.

Misogynistic hate threatening journalists in South Africa and beyond

Recent research shows that this is a global issue. In 2021, UNESCO published a report called ‘The Chilling’, which found that 73% of the 901 women journalists interviewed, across 125 different countries, said they had experienced online violence, with 20% reporting that they had also been attacked offline in connection with it. The study highlighted the heightened risk for Black, Indigenous, Jewish, Arab, and lesbian women, who reported the highest rates of online violence due to its intersection with other forms of discrimination.

More broadly, social media is rife with attacks targeting women and girls. Online movements promoting misogyny and blaming feminism for social ills, namely the ‘manosphere’, have gained power and seeped into mainstream social media, normalising hate against women. In 2020, Plan International conducted a survey of 14,000 young women in 22 countries and found that 58% had been harassed online, with 22% saying they or a friend feared for their physical safety as a result.


JANUARY 2023: SELF-PROCLAIMED MISOGYNISTIC INFLUENCER ANDREW TATE LEAVES ROMANIA’S ANTI-ORGANISED CRIME AND TERRORISM DIRECTORATE IN ROMANIA, CHARGED WITH RAPE AND HUMAN TRAFFICKING. CREDIT: MIHAI BARBU/AFP VIA GETTY IMAGES

A major concern reflected in analysis of our previous investigations is the large divergence in how social media corporations treat users around the world, with some platforms appearing to invest more in content moderation in the US than in the other countries we’ve examined. This is particularly concerning given that women experience online violence at even higher rates in global majority countries. We spoke to Alexandra Pardal, Campaigns Director at Digital Action, which is convening a new movement, the Global Coalition for Tech Justice, to address these inequities. She said: “Social media corporations have failed to deal with the most egregious harms on their platforms everywhere in the world, but particularly in global majority countries like South Africa. We must challenge this lack of global equity and ensure that platforms urgently invest in protecting their users’ safety, regardless of where they are and the language they speak.”

“We must challenge this lack of global equity and ensure that platforms urgently invest in protecting their users’ safety.” - Alexandra Pardal, Campaigns Director at Digital Action

Platforms and governments must act now

This investigation comes at a crucial time, with South Africa one of the 65+ countries due to go to the polls in 2024 in the biggest global election year so far this century. The preservation of press freedom is essential to uphold the democratic process during this time. Against this backdrop, we need women journalists to be able to carry out fearless political reporting that is in the public interest and holds governments and politicians to account, without fear of gendered reprisals online and offline.

At the root of this issue are social media corporations’ highly profitable business models that are designed to maximise engagement, often at the expense of safety. To tackle this issue at a systemic level, they must put people before profits. We need to build safety-by-design into the platforms along with balanced regulation grounded in human rights, protected from state overreach. Looking ahead to the 2024 elections, social media corporations must act now to invest in content moderation practices and implement measures that safeguard women’s rights, media freedom and global democracy.

In response to Global Witness’ investigation, a Meta spokesperson said: "These ads violate our policies and have been removed. Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That's why ads can be reviewed multiple times, including once they go live.”

A TikTok spokesperson said that hate has no place on TikTok and that their policies prohibit hate speech. They said that their auto-moderation technology correctly flagged the submitted advertisements as potentially violating their policies, but that a second review, by a human moderator, incorrectly overrode the decision. They said that their content moderators speak English, Afrikaans, Xhosa, and Zulu, and that they are investing in Trust and Safety globally, including expanding operations for the Africa-based TikTok community [2].

Google and X/Twitter were approached for comment but did not respond.


[1] Unlike on the other platforms, where an approval process changed the status of adverts prior to publication, the status of the test adverts on X/Twitter remained ‘scheduled’. To investigate further, we uploaded three new test adverts that were non-controversial (they violated the platform’s quality policy, with blatant errors in spelling, grammar, and image quality) and, unlike the ‘scheduled’ hate speech test adverts, we published them. This appeared to trigger an approval process that caused two of the ‘scheduled’ hate speech test adverts to be ‘halted’. Even though the new test adverts violated the platform’s quality policy, they were all published without being ‘halted’ or rejected and ran for 24 hours before we removed them.

[2] The full response from TikTok was: “This content should not have been approved for our platform. Our advertising policies and Community Guidelines prohibit ad content that contains hate speech or hateful behavior, and this includes content that attacks a person or group because of protected attributes, such as gender and gender identity. Our auto-moderation technology correctly flagged the submitted advertisements as potentially violative of our policies. A second review, by a human moderator, incorrectly overrode the decision. Errors like this are the exception and we are continually refining our moderation systems and improving our moderator training.

Our moderation experts speak more than 70 languages and dialects, including English, Afrikaans, Xhosa, and Zulu. As our Africa-based TikTok community has grown, we have likewise expanded our safety operations. This includes investing in more capacity and increasing local language support. We are taking aggressive steps against persistent bad actors and as part of our improved model detection, we are using network signals to find emerging trends and identify new violating accounts. As a result, users are seeing less violative content on the platform.

TikTok has more than 40,000 dedicated and talented professionals working to keep the platform safe for users. We recognize that we won't catch every piece of violative content, however, we continue to grow our teams and invest in our Trust and Safety operations globally. Feedback from NGOs like Global Witness is a valuable part of how we continue to make TikTok safer. We appreciate you reaching out and the opportunity to share our findings.”