A week after Meta announced that it would end fact-checking on its platforms, parts of adland were quick to respond with hostility to the change, prompting a re-evaluation of Meta as a media channel.
Meg Anderson, senior social strategist at Bartle Bogle Hegarty, said that the move was “frightening” for users and brands. “Less biased censorship and fewer inexplicable shadow bans have their appeal, but at what cost? There are huge brand safety concerns about showing up in an X-like environment where your organic content may sit beside hateful commentary.”
After Mark Zuckerberg's announcement on 7 January, the change in Meta’s content moderation practices meant that posts claiming that gay and trans people are "mentally ill" and referring to women as "household objects" would be allowed unless flagged by other users.
Anderson added: “The new hateful conduct policy also puts a huge risk on the safety of queer creators. Brands working with these creators won’t be able to ensure that the additional exposure from partnerships won’t lead to free hate speech. The notion that online hate leads to real-life harm is a very true and tragic reality in the queer community. Brands need to stand by their LGBTQ+ creators and stand up for their brand values when the platform fails to protect them.”
Three days after the announcement, on 10 January, Meta terminated its DEI programmes.
Outvertising, an LGBTQ+ advocacy group for advertisers, has shared a survey asking the industry how the change will affect trust in the platforms and whether clients are expected to shift media spend.
Will media spend move to other platforms?
Chief executive of media agency Bountiful Cow, Adam Foley, said that social media networks “have never been particularly good” at fact-checking and so, on one hand, welcomed the decision.
“As an industry, we spend far too much money on social media,” he said. “This is a moment for us to pause and reconsider that investment.”
Foley explained the impact of politics on tech companies: “Opaque, human-led editorial decisions made by left-leaning Silicon Valley staff have often seen conservative voices quietened via shadow-banning or silenced altogether. This has created a climate of imbalance and mistrust. In US politics, we will now see a strong desire from tech companies to correct this, catch the vibe shift, and bend the knee to Trump.
“There can be little doubt that in the wake of this decision, Meta will flare up with extreme political views from both sides. Advertisers who can stomach this environment will continue to find a great deal of reach available there, although they’ll need to pay to filter out the truly unpalatable stuff.”
Not all marketers are horrified by the change; some argue that, as Zuckerberg suggested, social media platforms should not be “arbiters of truth” and that this role should be left to trusted news outlets.
Becky Owen, chief marketing officer at influencer agency Billion Dollar Boy and a former Meta employee, said: “It has never felt fully comfortable for a for-profit organisation to essentially be the arbiter of fact, but to totally absolve itself of any responsibility for information and misinformation does not seem right either.”
Foley added: “If we’re looking for brand-safe environments with high-quality editorial control, we should turn to newsbrands rather than social platforms.”
Immediately after the announcement, Facebook and Instagram no longer had fact-checkers. Instead, the platforms now rely on “Community Notes”, an approach similar to X's and to the community model used by TikTok.
Jamie Barnard, chief executive of Compliant, a tech company specialising in compliance in digital media, does not expect a material pull-back of media spend as a result of the change.
He noted: “It’s important to decouple content moderation and fact-checking from brand safety.”
He added: “Brands can still distance themselves from unsuitable content using existing tools and settings. The issue here is about Meta taking less ownership and control over content moderation and fact-checking, and pushing that on to the community. This will likely result in deeper polarisation, disinformation, manipulation, and exposure to harmful and offensive content.”
SocialChain’s group strategy director, Ric Hayes, agreed that media buyers were “unlikely to abandon” platforms like Meta entirely but may shift their strategies. He said: “Smaller, more secure platforms focusing on trust might draw advertising budgets away from larger platforms.”
Meta remains the largest social platform on the market, commanding a 62.6% share of global social media adspend last year, according to Warc. TikTok's owner, ByteDance, continues to erode that lead and now draws a fifth (20.1%) of all social spend.
X's ad revenue has dropped significantly since Musk's takeover, falling by almost half (46.4%) between 2022 and 2023, from $4.5bn to $2.2bn. Research suggests that this could be due to the lack of content moderation policies, which caused major advertisers including Coca-Cola, Unilever and Mondelez to pull spending over brand safety concerns.
David Wilding, former senior director of planning at Twitter and now a strategist at Group M following the absorption of EssenceMediacom X, explained in a post on LinkedIn that community notes predate Elon Musk's ownership of X. The feature was launched by the platform in 2021 under its original name, “Birdwatch”.
He said that Birdwatch was intended to work alongside human moderators and other practices to reduce misinformation.
Wilding said human fact-checking is important because “even if a community notes type feature can be trusted to come up with the correct information in aggregate (and research shows that’s a huge if) it simply can’t do it quickly enough before significant damage is done with potentially awful consequences. You need human fact-checkers and curators to do this."
He added that he wondered whether “Meta haven’t really thought about this enough or whether they’re more than aware of the likely consequences but have decided to do it anyway”.
Group M and Wavemaker declined to comment publicly, “given the highly political nature” of the story.
Zuckerberg's decision came in the wake of Donald Trump's re-election as US president, with his inauguration only a week away (20 January). But this isn’t the first time Zuckerberg has alluded to changing his content moderation practices.
During the last presidential election in 2020, he had said that Facebook should not be an “arbiter of truth” and that the company should step away from regulating free speech, a path similar to the one Elon Musk has since taken with X.
His explanation for the change last week was that fact-checkers “have just been too politically biased, and have destroyed more trust than they've created, especially in the US”.