Our new investigation reveals that Facebook approved incendiary ads targeted across the sectarian divide in Northern Ireland.

When political advertising was limited to TV and billboards, everyone got the chance to see what political parties were saying. Opponents could counter their claims, helping hold them to account, and all voters heard the same messages. That’s not the case today.

Instead, digital tech platforms now harvest our data from the things we say, like and follow on their own platforms, the places we visit elsewhere on the internet and even the places we visit offline. They use this data to segment us into small categories and sell those profiles to advertisers, all without us being aware of what’s going on or how we’ve been categorised, and without our explicit consent.

That means that a political group or foreign government that wants to sow division or hatred in our society now has the tools to find a sympathetic audience and spread its message easily and cheaply. One way to do this is through targeted advertising. Because these ads are shown only to selected audiences, you’re probably unaware of what political messages others are being shown. That’s dangerous to our democracies.

These aren’t just theoretical risks - they’re threats that we already see playing out around the world, from ads designed to suppress votes to those aimed at recruiting militia members.

Given that Facebook keeps almost all the information about how ads have been targeted hidden from view, we set about testing the extent to which Facebook allows political adverts that are targeted in a divisive way. 

We did this by submitting political ads to Facebook and recording which ones they approved for publication. The ads were targeted in a variety of polarising ways and included content that breached Facebook’s rules on hate speech and inciting violence. Of course, we didn’t actually publish any of the ads. We set a publication date in the future and deleted the ads once we received the notification from Facebook as to whether they were approved for publication or not. 

Facebook says that before adverts are permitted to appear online, they’re reviewed to make sure they meet Facebook’s advertising policies, and that during this process they check an advert's “images, video, text and targeting information, as well as an ad's associated landing page”. The process relies primarily on automated tools, though Facebook reveals little about how it works in practice. We flagged the ads as “political”, as Facebook requires.

We proceeded cautiously, imagining that the inflammatory ads would surely be rejected and our Facebook accounts shut down. But in fact, every single ad we thought up was accepted for publication, often within hours. 

So what were the ads, and how were they targeted? We looked at the potential for stoking divisions and inciting violence along sectarian (Protestant/Catholic) lines in Northern Ireland. When we were devising the ads, tensions in Northern Ireland were rising, making it a timely context in which to test whether Facebook would allow ads targeted in a polarising way.

In fact, not long after Facebook accepted our ads, violence broke out on the streets, with masked youths rioting and a bus hijacked and set on fire. We’re not suggesting that religiously targeted ads contributed to these tensions; we’re demonstrating the harm that could be caused when political ads are targeted at narrow groups. This sort of material has the potential to further inflame tensions and lead to real-world violence - not just in Northern Ireland, but anywhere our differences can be exploited by those who wish to divide us.

Facebook says that during its ad review process one of the things it checks is how an ad is targeted. Yet they allowed us to target inflammatory political ads across the sectarian divide by: 

  • Targeting people in Northern Ireland who Facebook has profiled as having an interest in Protestantism.
  • Targeting people in Northern Ireland who Facebook has profiled as having an interest in the Catholic Church.
  • Targeting people living on the predominantly Catholic Falls Road side of the peace wall in west Belfast, using postcode targeting.
  • Targeting people living on the predominantly Protestant Shankill Road side of the peace wall in west Belfast, using postcode targeting (a sketch of how such targeting is specified appears below).
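
To make this concrete, here is a minimal sketch of how an advertiser can express exactly this kind of targeting through Facebook’s Marketing API. This is our illustration rather than the exact ads or settings we submitted: the access token, account, campaign and interest IDs, and postcode keys are hypothetical placeholders, and budget and optimisation fields required by the real API are omitted for brevity.

```python
# Illustrative sketch only: how interest- and postcode-based targeting is
# expressed in a Facebook Marketing API ad set "targeting" spec.
# All IDs, tokens and keys below are hypothetical placeholders.
import json
import requests

ACCESS_TOKEN = "EAAB..."        # placeholder access token
AD_ACCOUNT = "act_123456789"    # placeholder ad account ID

# Interest-based targeting: reach people Facebook has profiled as having
# a particular interest (interest IDs come from Facebook's targeting search).
interest_targeting = {
    "geo_locations": {"countries": ["GB"]},
    "interests": [{"id": "6003000000000", "name": "Example interest"}],
}

# Postcode-based targeting: reach people living in specific postal districts
# (keys take the form "<country>:<postcode>").
postcode_targeting = {
    "geo_locations": {"zips": [{"key": "GB:BT12"}, {"key": "GB:BT13"}]},
}

# Creating an ad set with either spec is a single HTTP call
# (swap in postcode_targeting to target by postal district instead).
response = requests.post(
    f"https://graph.facebook.com/v12.0/{AD_ACCOUNT}/adsets",
    data={
        "name": "Example ad set",
        "campaign_id": "120200000000000000",  # placeholder campaign ID
        "targeting": json.dumps(interest_targeting),
        "status": "PAUSED",
        "access_token": ACCESS_TOKEN,
    },
)
print(response.json())
```

The point is not the specific API call but how little friction there is: a handful of lines is enough to aim an ad at one religious community, or at the streets on one side of a peace wall.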

In the wrong hands, ads targeted in this way can do a lot of damage - they’re perfect for inflaming tensions.

Below, we describe the ads that we submitted and the ways that they were targeted.

Divisive ads targeted across the sectarian divide

We created two ads deliberately designed to be divisive. One said ‘Northern Ireland is for the British - join the cause’ and was targeted at people who Facebook had profiled as having an interest in Protestantism. The other said ‘They’ll never leave the North of Ireland unless we make them’, and was targeted at people who Facebook had profiled as having an interest in the Catholic Church. What these ads are effectively calling for is exclusion or segregation on the basis of religious affiliation. We therefore believe that they breach Facebook’s community standards on hate speech. [1] Facebook accepted both ads for publication.

Ads containing hate speech targeted across the sectarian divide

Next we devised two political ads that went further. This time, our ads expressed contempt for either Protestants or Catholics, asserted their inferiority, and contained offensive sectarian slurs. Both ads violate Facebook’s community standards on hate speech, which ban offensive slurs used to describe people’s religious affiliation. [2]

As before, we targeted the ads to people categorised as being interested in Protestantism or the Catholic Church. Facebook gave us permission to run both ads.

Ads inciting violence targeted across the sectarian divide

We followed this up with an ad encouraging people to take to the streets because ‘voting hasn’t worked’. To make it clear that we meant riots and violence, not peaceful protest, we added a picture of a burnt-out car. This ad violates Facebook’s community standards on violence and incitement. [3]

We targeted the ad in two ways: first, as above, to the interest groups Facebook has categorised people into; and second, to people living on either side of the main peace wall in west Belfast - the predominantly Catholic Falls Road side and the predominantly Protestant Shankill Road side - in fact, the very area where violence and rioting spilled over the peace wall in early April 2021. Of course, we don’t think that everyone living in these areas would be susceptible to our ad, but if you are looking to stoke divisions then targeting in this narrow geographic way is a useful tool. Facebook accepted the ads for publication.

[Image: our ‘Voting hasn’t worked’ ad]

This ad was targeted to people in Northern Ireland who Facebook has profiled as having an interest in Protestantism or the Catholic Church as well as to people living either side of the peace wall in west Belfast.

We put the allegations raised in this article to Facebook to give them the opportunity to respond. A Facebook company spokesperson said: “Several of these adverts violate our policies against hate speech and incitement of violence and have since been removed. Our enforcement is not perfect, but we’re always working to strengthen and improve our processes.” To be clear, it was Global Witness who deleted the ads before they had a chance to be published, not Facebook. Facebook pointed out that adverts may be reviewed again after they go live, and are always reviewed when a user reports material as not complying with Facebook's policies. Reviews may be carried out by people or by computers, Facebook said.

The spokesperson also said that “People's interests are based on their activity on Facebook -- such as the pages they like and the ads they click on -- not their personal attributes." We disagree. The law places special restrictions on how companies can process data that reveals our most personal attributes, such as our religious beliefs. Facebook is attempting to wriggle out of this obligation by claiming that people’s interest in a topic such as Protestantism or the Catholic Church does not reveal anything about their religious views.

What we’ve shown here won’t be solved just by employing more and more content moderators, useful though that might be. The problem runs deeper than that.  

It’s the targeting tools and the surveillance behind them that are the problem. Facebook gets to know what we post, like and share and who we’re friends with on its platform. It also gets fed information about many of the websites we visit elsewhere on the internet. It uses this information to sort and categorise us into profile groups. Sometimes the profiling is based on what they observe us doing or saying; other times it’s based on predictions (sometimes accurate, sometimes probably not) of who they think we are. This happens for profit, and often without our explicit consent or any understanding of how our data could be used - and it’s not just Facebook that does this, but other big tech companies too.

For as long as Facebook’s business model is selling our profiles to advertisers, based on deeply personal predictions about us such as our religious views, the system will be open to abuse by those who wish to polarise us.  

Self-regulation clearly isn’t working - every single one of our policy-breaching ads was approved, despite Facebook’s nicely-worded policies banning them. We need legislation to change the system.

We need transparency. Everyone should be able to see key information about ads, including who is behind them, how much was spent, and, crucially, how they were targeted, with the same level of detail that advertisers get to choose from. 

And we need to end advertising that relies on surveillance, including ads that rely on inferences about our beliefs and interests. Tracking our every click online in order to profile us isn’t just the creepy bit of social media; it’s the central part of the online platforms’ business models.

In the case we’ve looked at here, selling advertisers the ability to target people along religious lines in a place where religious divisions underlie a long-running conflict and fragile peace is playing with fire. More generally, selling our profiles to advertisers polarises us, encourages extremism and is downright dangerous to democracy.

This summer, UK and EU legislators are scrutinising bills to regulate digital services and safeguard internet users. These bills will fail to protect users from harm and fail to defend our democracies if they do not ban surveillance advertising or bring targeted advertising out of the darkness.


[1] Facebook’s community standard on hate speech says: “we don't allow hate speech on Facebook ...We define hate speech as a direct attack against people on the basis of what we call protected characteristics: [which include] [...] religious affiliation [...]. We define attacks as [...] dehumanising speech, [...] statements of inferiority, expressions of contempt, [...] calls for exclusion [...].” 

[2] Facebook’s community standard on hate speech says: “we don't allow hate speech on Facebook ...We define hate speech as a direct attack against people on the basis of what we call protected characteristics: [which include] [...] religious affiliation [...]. We define attacks as [...] dehumanising speech, [...] statements of inferiority, expressions of contempt, [...] calls for exclusion [...].” In addition, Facebook’s community standard on hate speech says you must not post “Content that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above-listed characteristics [which include religious affiliation].” Facebook’s advertising policy on sensational content prohibits “Ads promoting [...] statements of inferiority, or contempt [...] based on protected characteristics.” https://www.facebook.com/policies/ads/prohibited_content/community_standards https://www.facebook.com/communitystandards/hate_speech https://www.facebook.com/policies/ads/prohibited_content/sensational_content

[3] Facebook’s community standard on violence and incitement says you must not post: “Statements of intent to commit high-severity violence. This includes content where a symbol represents the target and/or includes a visual of an armament or method to represent violence” or “Any content containing statements of intent, calls for action, conditional or aspirational statements, or advocating for violence due to voting, voter registration or the administration or outcome of an election.” https://www.facebook.com/communitystandards/credible_violence