Facebook’s self-proclaimed efforts to safeguard election integrity in Brazil appear to be failing. Our new investigation finds that Facebook appallingly failed to detect election-related disinformation in ads in the run-up to the Brazilian election on 2 October 2022. Tensions are high in Brazil, with the growing threat of disinformation – which undermines confidence in the election results and could lead to violence – overshadowing the campaigning period.

This follows a similar pattern we uncovered in Myanmar, Ethiopia, and Kenya of Facebook’s inability to detect hate speech in volatile political environments – but what’s different this time is that the ad content contained outright false election information (such as the wrong election date) and information designed to discredit the electoral process, thereby undermining election integrity. 

Facebook touts its election integrity efforts, claiming to have “advanced security operations to take down manipulation campaigns and identify emerging threats”, but our findings are a stark reminder of how easy it is for bad actors to circumvent its measures. Given the high stakes of the Brazilian election, Facebook is failing to adequately protect Brazilians from a disinformation nightmare.
We are focused on providing reliable election information while combating misinformation across languages. - Meta, 2022

Our investigation 

We tested Facebook’s ability to detect election-related disinformation ahead of the election, using examples we sourced from a mix of real-life posts and examples that had been highlighted in 2021 by Brazil’s highest electoral authority, the Superior Electoral Court (TSE). 

In total we submitted ten Brazilian Portuguese-language ads to Facebook – five containing false election information and five aiming to delegitimise the electoral process. By conducting the investigation in Brazil, which Facebook has named one of its priority countries for elections, we were able to test whether Facebook can detect outright election disinformation as well as it suggests it can.

Alarmingly, all of the election disinformation examples were approved.

Initially, one of the ads we submitted was rejected under Facebook’s “Ads about social issues, elections, or politics” policy. But just six days later, without any intervention from Global Witness, the ad was approved without explanation. This bizarre sequence of decisions from Facebook seriously calls into question the integrity of its content moderation systems.

All of the ads we submitted violate Meta’s election ad policies. The content clearly contained incorrect information that could stop people from voting – such as false information about when and where to vote and about methods of voting (e.g. voting by mail) – and, importantly, content that delegitimised methods of voting by questioning the integrity of Brazil’s electronic voting machines. 

Critically, we didn’t verify the account that we used to place the ads using the “ad authorisations” process – also violating Meta’s policies on who is allowed to place political ads. This is a safeguard that Meta has in place to prevent election interference, but we were easily able to bypass it. 

When placing the ads, we were not physically located in Brazil, nor did we use a VPN to mask our location – in this instance we posted the ads from Nairobi and London, which should have raised flags given the content of our ads. We were not required to put a “paid for by” disclaimer on our ads, as would be required for political ads. Moreover, we did not use a Brazilian payment method to pay for the ads. All of this raises serious concerns about the potential for foreign election interference and Facebook’s inability to pick up on red flags and clear warning signs.  

We submitted the election disinformation in the form of ads because ads are reviewed by Facebook’s content moderation process before they go live, which enabled us to remove them before publication while still testing that review. According to Facebook, this often includes proactive review using automated and manual tools. Facebook lauds its own system for reviewing ads, with advertisers being held to an ‘even stricter’ standard.

A Meta spokesperson said in response to our findings that they “are and have been deeply committed to protecting election integrity in Brazil and around the world”. They said that they have prepared extensively for the upcoming election in Brazil including launching tools to label election-related posts and establishing a direct channel for the Superior Electoral Court to send them potentially harmful content for review. They cited figures for the number of posts they removed in the last election for violating their policies. Their full response is included in the endnote.


Brazilian President Jair Bolsonaro has used Facebook extensively in his campaigning. Facebook makes up nearly half of social media use in Brazil. REUTERS/Sergio Moraes

Why this matters

Brazilians take to the polls on Sunday 2 October 2022 to elect their President – the first general election since Jair Bolsonaro took power.

The choices of the world's major tech companies have had a big impact online before and after high-stakes elections around the world, and all eyes are on Brazil this year. Disinformation featured heavily in its 2018 election, and this year’s election is already marred by reports of widespread disinformation, spread from the very top: Bolsonaro is already seeding doubt about the legitimacy of the election result, leading to fears of a United States-inspired January 6 “stop the steal” style coup attempt.

And up for election this year is the climate. Bolsonaro’s record on climate has been described as inadequate and his climate commitments as “lip service”. There is very little transparency into just how much social media platforms are fuelling the disinformation problem in Brazil, but researchers have been able to identify that social media ‘filter bubbles’ are fuelling climate denial messaging and hate towards climate activists – while also pushing messages that undermine Brazilians’ trust in their democratic systems.  

Online election disinformation in Brazil

Disinformation in high-stakes elections, particularly on social media, has been highlighted with examples stemming from 2016’s Brexit referendum and the 2016 US election through to today. In Brazil’s 2018 elections, the Superior Electoral Court (TSE) – Brazil’s highest election authority – became the target of disinformation campaigns that aimed to undermine confidence in its electronic voting system (a system that has been in place since 1996). Since then, there have been ongoing campaigns attempting to delegitimise the electoral process in Brazil. 

In 2019, Brazil set up the Program to Counter Disinformation, which investigated disinformation during the 2020 elections and launched several trial initiatives. The program was made permanent to secure the 2022 presidential election and beyond. Through it, the TSE has maintained an open dialogue with social media platforms and specifically highlighted common examples of election-related disinformation.

Facebook is the most used social media platform in Brazil, making up 48.56% of all social media visits.
Brazilian voting machine

Disinformation aiming to delegitimise methods of voting, such as Brazil's electronic voting machines – in use since 1996 – is banned under Facebook's community guidelines. All ads placed by Global Witness aiming to delegitimise the election result were approved. REUTERS/Paulo Whitaker (BRAZIL)

What needs to change   

It’s clear that Facebook’s election integrity measures are simply ineffective. Meta must recognise that protecting democracy is not optional: it’s part of the cost of doing business.   

Our findings in Myanmar, Ethiopia and Kenya show that Facebook’s content moderation efforts are seriously lacking – now reinforced in Brazil, where the bar for advertising with explicit political content is ostensibly even higher. This follows reports from employees that Zuckerberg is no longer prioritising safeguarding elections, instead focusing on the so-called ‘metaverse’ – Meta’s new frontier of growth.  

Our findings also suggest that Facebook’s ad authorisations process – supposedly a compulsory measure for anybody wanting to post political or social issue ads – is in practice opt-in and easily circumvented. This means that Facebook’s own ad library, its “most comprehensive ads transparency surface”, does not give full transparency into who is running ads, who was targeted, how much was spent, and how many impressions the ads received. This information is vital so researchers, journalists, and policy makers can investigate what’s going on and suggest interventions to help protect democratic systems. 

While the EU is taking a lead globally in regulating Big Tech companies and forcing meaningful oversight, platforms should also be acting of their own volition to protect their users fully and equally.

We are committed to securing our platforms and providing transparency during elections - Meta, 2022

We call on Facebook to: 

  • Urgently increase the content moderation capabilities and integrity systems deployed to mitigate risk before, during and after the upcoming Brazilian election – and ensure that the moderators understand the appropriate cultural context and nuance of Brazilian politics. 
  • Immediately strengthen its ad account verification process to better identify accounts posting content that undermines election integrity.
  • Properly resource content moderation in all the countries in which they operate around the world, including paying content moderators a fair wage, allowing them to unionise and providing psychological support. 
  • Routinely assess, mitigate and publish the risks that their services pose to people’s human rights and other societal-level harms in all countries in which they operate. 
  • Publish information on what steps they’ve taken in each country and for each language to ensure election integrity.
  • Include full details of all ads (including intended target audience, actual audience, ad spend, and ad buyer) in their ad library.
  • Allow verified independent third party auditing so that Meta can be held accountable for what they say they are doing.
  • Publish their pre-election risk assessment for Brazil.
  • Respond to the 90+ Brazilian civil society organisations’ policy recommendations in their report The Role Of Digital Platforms In Protecting Electoral Integrity In The 2022 Brazilian Election.


The ads were in the form of an image (text on a plain background) and were not labelled as being political in nature.

Researchers interested in knowing the exact wording of the political ad examples we used are welcome to request this from us by writing to [email protected]

Facebook's full response to our opportunity to comment letter:
We cannot comment on these findings as we don't have access to the full report. However, we prepared extensively for the 2022 election in Brazil. We’ve launched tools that promote reliable information and label election-related posts, established a direct channel for the Superior Electoral Court to send us potentially-harmful content for review, and continue closely collaborating with Brazilian authorities and researchers. Our efforts in Brazil’s previous election resulted in the removal of 140,000 posts from Facebook and Instagram for violating our election interference policies and 250,000 rejections of unauthorized political ads. We are and have been deeply committed to protecting election integrity in Brazil and around the world.