If you are among the half of UK or US adults who get their news through social media, you might be under the impression that the scientific consensus surrounding human-made climate change is still a matter of debate. You would be wrong.

A recent Cornell University study found that over 99.9% of peer-reviewed scientific papers agree that climate change is mainly caused by humans, a level of certainty similar to that underpinning the theory of evolution. And yet that is not the reality reflected on Facebook.

When we simulated the experience of a climate-sceptic user on the platform, within a few clicks Facebook’s algorithm recommended content that denied the existence of man-made climate warming and attacked measures aimed at mitigating the climate crisis. Much of this content deployed culture war tactics to polarise debate around climate change and demonise environmental movements.

Climate disinformation and misinformation refer to deceptive or misleading content that:

> Undermines the existence or impacts of climate change, the unequivocal human influence on climate change, and the need for corresponding urgent action according to the IPCC scientific consensus and in line with the goals of the Paris Climate Agreement

> Misrepresents scientific data, including by omission or cherry-picking, in order to erode trust in climate science, climate-focused institutions, experts, and solutions; or

> Falsely publicises efforts as supportive of climate goals that in fact contribute to climate warming or contravene the scientific consensus on mitigation or adaptation.

This definition was developed by Climate Action Against Disinformation.

In April of last year, Facebook CEO Mark Zuckerberg admitted during congressional testimony that climate disinformation is ‘a big issue’ on the platform. In an effort to combat this, Facebook created its ‘climate science centre’, an information hub designed to ‘connect people with science-based information on climate change’.

Facebook also expanded its flagging of climate-related posts with information labels linking to its climate science centre, and announced a one-million-dollar grant programme to support organisations working to combat climate misinformation, a sum equivalent to roughly thirty minutes of company profits.

Our investigation found that, despite the platform’s promises to mitigate climate disinformation, Facebook’s algorithm continues to recommend pages with content promoting claims that:

  • The climate crisis is a hoax
  • Rising temperatures are part of natural cycles
  • Environmentalists are alarmists
  • Climate scientists are biased
  • Warming models are inaccurate
  • Mitigation solutions won’t work or are otherwise bad for society

This disinformation has consequences. 

Research shows that climate disinformation is a primary contributor to public polarisation over the climate crisis and that it shapes public attitudes toward climate science. Individuals who are exposed to this kind of disinformation are less likely to support mitigation policies, hindering the ability of policymakers to take meaningful climate action.

To summon the swift and decisive measures needed to protect against the worst effects of climate change, it is imperative that the unity and shared understanding exhibited by the 99.9% consensus extend beyond the scientific community. We need agreement on the facts, politicians who are moveable on the issue, citizens who are informed by accessible and reliable climate information, and solidarity within and between countries.

Enter Facebook.

By design, Facebook encourages users to maximise the time they spend on the platform by incentivising them to keep scrolling, liking, and sharing content. The motive that drives this design is profit: the longer a user stays on Facebook, the more ads it can show that user, generating more revenue. All the while, Facebook amasses ever more specific data points about that user and uses them to sell better-targeted advertising.

But it’s not just the ads that are tailored to your profile. Based on your likes, characteristics, behaviours, and demographics, Facebook’s algorithm will serve you content that it thinks you will enjoy (to keep you on the platform), and thus the circle continues.

The result is that Facebook drives individuals into filter bubbles where they are served information that affirms their world view, exploiting the psychological principle that people prefer to consume content that aligns with their existing belief systems. The information Facebook delivers to two users with differing beliefs on the same topic may therefore look nothing alike.
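Facebook’s real ranking system is proprietary, but the feedback loop described above can be sketched in miniature. The toy recommender below is purely illustrative; every name, signal, and weight in it is our own assumption, not anything disclosed by Facebook.

```python
# Purely illustrative toy recommender: every signal and weight here is an
# assumption, not Facebook's disclosed design. It shows the loop described
# above: rank by predicted engagement, then let each interaction sharpen
# the profile that drives the next round of ranking.

from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str            # e.g. "climate_denial" or "climate_science"
    outrage_score: float  # hypothetical signal; divisive content scores higher

@dataclass
class UserProfile:
    affinities: dict = field(default_factory=dict)  # per-topic affinity

    def predicted_engagement(self, post: Post) -> float:
        affinity = self.affinities.get(post.topic, 0.1)
        # Assumed engagement boost for divisive content.
        return affinity * (1.0 + post.outrage_score)

    def record_interaction(self, post: Post) -> None:
        # Every like/share deepens the affinity that produced it,
        # narrowing the feed toward the user's existing beliefs.
        self.affinities[post.topic] = self.affinities.get(post.topic, 0.1) + 0.2

def rank_feed(user: UserProfile, candidates: list) -> list:
    # Serve whatever the model predicts the user will engage with most.
    return sorted(candidates, key=user.predicted_engagement, reverse=True)
```

Run in a loop, rank_feed and record_interaction feed each other: the more a user clicks on climate-sceptic posts, the more such posts the ranking favours. That self-reinforcing cycle is the filter bubble.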

Knowing this, and also that Facebook’s algorithm rewards content that is divisive, outrageous and polarising, we set out to test whether the company’s stated commitment to mitigating the proliferation of climate lies matched the experience a user who doubted climate science might have on the platform.

Jane falls down the rabbit hole

To test this, we first created a grading system to categorise different types of climate content (summarised below). The system builds on the most recent climate communications research by leading academics.

A and B represent varying degrees of climate denial, which rejects the existence of climate change or the fact that human activity is causing it.

C, D, and E can be described as ‘distract and delay tactics’: they don’t deny the existence of climate change, but rather denounce solutions to address it or portray scientists and environmentalists as biased and alarmist.

F, G, and H represent greenwashing, accurate climate content, and other climate conspiracies respectively.
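For reference in what follows, the grading system can be written down as a simple taxonomy. The sketch below paraphrases the descriptions above; the precise distinctions between A and B, and among C, D, and E, are our shorthand rather than the full rubric.

```python
# Illustrative encoding of the A-H grading system. The one-line descriptions
# paraphrase the categories above; they are shorthand, not the full rubric.
from enum import Enum

class Grade(Enum):
    A = "climate denial: rejects the existence of climate change"
    B = "climate denial: rejects that human activity is the cause"
    C = "distract and delay: denounces solutions to climate change"
    D = "distract and delay: portrays scientists as biased"
    E = "distract and delay: portrays environmentalists as alarmist"
    F = "greenwashing"
    G = "accurate climate content"
    H = "other climate conspiracies"

DENIAL = {Grade.A, Grade.B}
DISTRACT_AND_DELAY = {Grade.C, Grade.D, Grade.E}
```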

Once we had our grading system in place, we created a brand-new Facebook account and directed our user, let’s call her Jane, to ‘like’ the page Global Warming Policy Forum (which recently changed its name to ‘Net Zero Watch’).

The page belongs to an organisation of the same name, which is heavily involved in campaigning against net zero efforts and boasts a board of trustees including UK Member of Parliament Steve Baker. The organisation’s Facebook page has over 14,000 followers and regularly publishes content attacking policies aimed at reducing carbon emissions.

Immediately after Jane liked the page, a pop-up of recommendations directed her to other pages that Facebook’s algorithm believed she would enjoy. We took a look at the first three climate-related pages and, using our categorisation system, graded the first nine pieces of content on each page: all but one of the recommendations was dedicated to climate disinformation.

We decided to take the simulation a step further and directed Jane to ‘like’ the first recommendation the algorithm had offered her, a page called ‘Climate Depot’. Climate Depot is an outlet run by Marc Morano, who heads communications for the non-profit CFACT, a group that rejects the scientific consensus around man-made climate change, calling it a ‘myth’.

The page, which has 6,784 followers, posts statements like “The IPCC’s shrill warnings of doom are based on the most extreme, yet most unlikely climate models”, as well as a warning about how President Biden’s climate plans would cause unprecedented damage to America’s individual freedoms, finishing with the statement “forewarned is forearmed”.

When Jane ‘liked’ Climate Depot, she was again presented with a menu of recommendations that Facebook’s algorithm thought, based on her existing likes, she would find engaging.

Of the 27 pieces of content that we graded across the first three recommended climate pages, 100% was climate disinformation, falling into either the denial or ‘distract and delay’ categories.

Eager to see whether this was just a one-off, we replicated the test with the same account to see what kinds of climate content would be amplified to Jane, whose online behaviour thus far demonstrated an interest in climate disinformation. Using two different anti-climate pages as our starting points, we followed the pages recommended to Jane to understand at what point, if at all, Facebook’s algorithm would counter the bad information with reputable climate science.

In total we traced Jane’s trajectory from three starter pages leading to eighteen recommended pages and graded 189 pieces of content. The results?

Of the 18 pages recommended to Jane, only one did not contain any climate disinformation; twelve contained nothing but climate disinformation. Of the content we analysed, only 22% of climate disinformation posts carried a climate science centre flag. Within that, only 34% of climate denial content carried a flag.

Moreover, while Jane was only sporadically referred to the centre, she was being actively encouraged to follow and like pages that almost exclusively espoused climate disinformation. Despite Facebook’s announced expansion of its flagging system, the company told us when it responded to our investigation that “for several months after we announced the initial experiment of informational labels in the UK, we did not completely roll out our labelling program.”
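For clarity on how the flag-coverage figures above were computed: each graded post was recorded together with whether it carried a climate science centre flag, and the percentages are simple proportions over the relevant categories. The snippet below shows the computation with hypothetical records in place of our dataset; the grading itself was done by hand.

```python
# Hypothetical records standing in for our dataset; the grading was done by
# hand. Each post gets an A-H grade plus whether it carried a flag.
posts = [
    {"grade": "A", "flagged": True},   # denial post with a climate science centre flag
    {"grade": "C", "flagged": False},  # distract-and-delay post, unflagged
    {"grade": "B", "flagged": False},  # denial post, unflagged
]

DENIAL = {"A", "B"}
# Assumption: disinformation spans denial, distract-and-delay and other
# conspiracies; how F (greenwashing) was counted is not restated here.
DISINFORMATION = DENIAL | {"C", "D", "E", "H"}

def flag_coverage(posts, categories):
    # Percentage of posts in the given categories that carried a flag.
    subset = [p for p in posts if p["grade"] in categories]
    return 100 * sum(p["flagged"] for p in subset) / len(subset)

print(f"{flag_coverage(posts, DISINFORMATION):.0f}% of disinformation posts flagged")
print(f"{flag_coverage(posts, DENIAL):.0f}% of denial posts flagged")
```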

A hazardous information environment

Our investigation indicates that climate-sceptic users are directed not only to more disinformation affirming their beliefs, and to communities of others disdainful of climate science, but often to progressively worse information: what began on a page full of distract-and-delay narratives ended on pages espousing outright climate denial and conspiracy.

Indeed, amongst the pages Jane was recommended were those dedicated to conspiracies like chemtrails, which claim that the condensation trails left by planes contain chemical agents that control the weather. This is alarming because Facebook’s algorithm appears to assume, correctly, that a person who already believes one conspiracy theory (say, that the climate crisis is a hoax) is more likely to believe others.

Our findings suggest that Facebook could well be radicalising users who, once they arrive on anti-climate-science pages, are driven to more extreme climate disinformation.

To contrast this with the Facebook experience of a user who already believed in, and cared about, the climate crisis, we created another Facebook account and directed our user, let’s call him John, to ‘like’ the IPCC’s Facebook page. Every page the algorithm recommended to John encouraged him to engage with more reliable climate science content. The split-screen realities of Jane’s and John’s experiences on the very same platform show the radicalising effect of Big Tech. Facebook’s algorithm is ultimately ensuring that the people most in need of good information are the ones least likely to get it.

This information environment is dangerous.

While it may only be a small subset of users who engage with climate change conspiracy and denial, recent events show that even obscure narratives that begin in the corners of social media can spill over into our physical realities and shift political discourse.

The QAnon movement, which holds that a global liberal elite runs child sex rings that only Donald Trump can stop, first emerged on the fringes of the internet in 2017. Today, in 2022, over 40 candidates who have expressed some public support for the QAnon conspiracy are running for US national office. Supporters of the conspiracy also played an important role in the January 6th Capitol insurrection, demonstrating the ability of online narratives to become real-world violence.

In her testimony to the UK parliament, Facebook whistle-blower Frances Haugen explained that Facebook’s recommendation system not only “amplifies divisive, polarizing, extreme content” but that this kind of content “gets hyper-concentrated in 5% of the population.” Haugen went on to say, “you only need 3% of the population on the streets to have a revolution, and that’s dangerous.”

So, while the amplification of climate disinformation may not land with a large portion of the electorate, it gives the doubters and opportunists permission to debate the proven reality of the climate crisis, weakening support for climate action and wasting time we don’t have.

Saving our planet should not be a partisan issue, and yet narratives around climate change are increasingly tangled up in culture wars and party politics that divide us and delay action. This is in part because platforms like Facebook give a small subset of climate deniers, intent on sowing doubt about anthropogenic climate change and the efficacy of solutions to tackle it, an outsized voice, reach, and power.

Recommendations

Facebook says that it wants to connect people with better information around climate change. Yet it also prioritises engagement at all costs, incentivising users to stay online through algorithms that facilitate the creation of filter bubbles and push users down rabbit holes. These goals are entirely at odds, and Facebook has shown that when weighing the health of users against its bottom line, it will choose profit time and again. Self-regulation is not working.

Governments must step in and legislate against the power of Big Tech to shape our realities in dangerous and divisive ways that threaten to derail progress toward tackling the greatest challenge our planet collectively faces.

The European Union is making strides toward this through its flagship legislation, the Digital Services Act (DSA). Once passed, the DSA will demand transparency and accountability mechanisms, such as requiring platforms to assess and mitigate the risks they pose to fundamental rights. It will also require platforms to open themselves up to scrutiny by independent auditors and researchers.

Our recommendations to governments:

  • The European Union should ensure that the DSA’s platform audits and risk assessment processes are comprehensive and routinely include climate disinformation.
  • Governments elsewhere in the world, notably the United States, should follow the lead of the European Union and legislate to regulate Big Tech companies, including through audits, risk assessments and algorithmic transparency requirements.

Our recommendations to Facebook and other social media companies:

  • Facebook and all social media companies should produce transparent, public-facing plans to meaningfully reduce the spread of climate disinformation on their platforms.
  • Facebook and other social media platforms should monitor and report on climate disinformation.


Responses

We put our findings to Facebook to give the company an opportunity to comment. A Meta representative responded that they take their “responsibility seriously as a platform to connect people to authoritative and accurate information and as a company that is passionate about climate action.” The company further stated, “Our systems are designed to reduce misinformation, including false and misleading climate content, not to amplify it.” It emphasised its fact-checking efforts, including review by both humans and artificial intelligence, and acknowledged that reducing misinformation requires ongoing investment in “the systems and partnerships to address this society-wide challenge”.

Methodology

We selected three ‘seed’ or starter pages that were climate-science sceptical and had large followings. We liked each page and recorded the first three climate-related pages recommended to us by the Facebook algorithm. ‘Climate-related’ was defined as any page with at least nine pieces of climate content among its first fifty posts. Ignoring pinned posts, we graded the first nine pieces of content on each page according to our categorisation system; where a piece of content fell into both the denial and distract-and-delay categories, we recorded the more extreme view. We repeated this process by ‘liking’ the first climate-related page recommended to us after liking each initial seed page. Posts that were not climate related were disregarded.
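In outline, the procedure reads like the following sketch. We carried out every step by hand in the Facebook interface; the helper functions are placeholders for those manual steps, not real API calls.

```python
# Outline of the procedure above. Every helper is a placeholder for a step we
# performed manually in the Facebook interface; there is no real API behind it.

def like(page): ...                # manually 'like' a page
def get_recommendations(): ...     # read the recommendation pop-up
def get_posts(page): ...           # read a page's posts in order
def is_about_climate(post): ...    # manual judgement call
def grade_post(post): ...          # manual A-H grade; where a post fits both
                                   # denial and distract-and-delay, record denial

def is_climate_related(page):
    # A page qualifies if nine of its first fifty posts are about the climate.
    return sum(is_about_climate(p) for p in get_posts(page)[:50]) >= 9

def grade_page(page):
    # Grade the first nine climate posts, ignoring pinned and non-climate posts.
    grades = []
    for post in get_posts(page):
        if post.pinned or not is_about_climate(post):
            continue
        grades.append(grade_post(post))
        if len(grades) == 9:
            break
    return grades

def run_simulation(seed_page):
    results = {}
    like(seed_page)
    # The totals reported above imply the seed pages were graded as well.
    results[seed_page.name] = grade_page(seed_page)
    first_round = [p for p in get_recommendations() if is_climate_related(p)][:3]
    for page in first_round:
        results[page.name] = grade_page(page)
    # One hop deeper: like the first climate-related recommendation and grade
    # the next round of recommendations the algorithm serves.
    like(first_round[0])
    second_round = [p for p in get_recommendations() if is_climate_related(p)][:3]
    for page in second_round:
        results[page.name] = grade_page(page)
    return results
```

Nine graded posts on each of the twenty-one pages involved (three seed pages plus eighteen recommended pages) is consistent with the 189 graded posts reported above.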