28 June 2021, London - A series of adverts containing inflammatory images and language, targeted at individuals across the sectarian divide in Northern Ireland, was approved by Facebook for publication shortly before this year's violent riots.

[Image: advert reading "Voting hasn't worked"]

This ad was targeted at people in Northern Ireland whom Facebook has profiled as having an interest in Protestantism or the Catholic Church, as well as at people living on either side of the peace wall in west Belfast. Facebook approved this ad for publication.

Using Facebook’s advertising platform, we submitted for approval political ads containing sectarian slurs and language encouraging violent protest, in violation of Facebook’s own policies. None of the content was actually made public: we withdrew each ad once we had confirmed it had been approved by the social media giant. We specified that these ads would be targeted at people whom Facebook has profiled as having an interest in either Catholicism or Protestantism.

In addition, we were able to demonstrate how this content could be narrowly targeted by postcode, specifically at those living on either side of the main peace wall in west Belfast - the predominantly Catholic Falls Road side or the mostly Protestant Shankill Road side - where violence and rioting have recently taken place.

Every single advert we tested was approved, often within hours, exposing the deep flaws in, and potential impact of, Facebook’s failing ad review system, as well as the potential dangers of the profiling tools it makes available to all advertisers.

Naomi Hirst, Head of the Digital Threats Campaign at Global Witness said:

“We couldn’t believe the ease with which ads that might have incited violence and contained hate speech could be targeted to specific communities in this way. With every post approved we doubled down on how inflammatory we could go and every single time Facebook gave us the green light.” 

“Facebook should need no reminding of the very real dangers of online content fanning pre-existing flames and how this can provoke offline harm. That they were willing to sell us the opportunity to target their users with material which might exploit tensions and foment violence is shocking.” 

“Social media platforms pose as socially conscious companies solely interested in connecting people but our investigation shows how dangerous their tools can be and how lax their efforts to safeguard their users are. Governments urgently need to legislate to rein in Big Tech’s troubling business model of profiling us for profit.” 

Facebook harnesses our personal data to categorise us into highly detailed profile groups. This draws not just on the information we share with the platform about what we like, who we are, and what we believe in, but also on information from our online activity off the site. Access to these profiles is then sold to advertisers, often without our explicit consent or even our knowledge, allowing us to be targeted based on our interactions on the platform.

This is the central pillar of Facebook's business model, and it is clearly open to abuse. Facebook claims that its ad review process checks how each ad is being targeted. Our investigation shows that even in the most extreme and obvious cases, Facebook is failing to live up to that pledge, or that its efforts are insufficient.

We are therefore calling for legislation that regulates Big Tech’s surveillance business model. We believe that companies like Facebook should be required to be fully transparent with users about who is targeting them, how, and how much is being spent. Legislators should ban platforms from profiling users by drawing inferences about their beliefs and interests.

In response to our investigation, a Facebook Company Spokesperson said: 

“Several of these adverts violate our policies against hate speech and incitement of violence and have since been removed. Our enforcement is not perfect, but we’re always working to strengthen and improve our processes. People's interests are based on their activity on Facebook -- such as the pages they like and the ads they click on -- not their personal attributes." 

They also pointed out that ads may be reviewed both before and after they go live, and again whenever a report is raised questioning whether an ad complies with Facebook's policies.

It was we who deleted the ads before they had a chance to be published, not Facebook.

We disagree with Facebook's claim that people’s interests reveal nothing about their personal attributes. The law places special protections on how companies can process data that reveals our most personal attributes, such as our religious beliefs. Facebook is attempting to wriggle out of this obligation by claiming that a person’s interest in a topic such as Protestantism or the Catholic Church does not reveal anything about their religious views.