Nick Clegg is now in charge of the reputation of Meta, a £500bn business and one of the biggest organisations in the world.
By the numbers, it’s popular: Meta has 3.6 billion active users across Facebook, WhatsApp, Instagram, and more. But its relationship with the public – and its corporate reputation – is fragile.
The reputational challenge Clegg faces is that social media is extremely difficult, if not impossible, to control.
Content is generated constantly, in real time, by all sorts of people: a wild, artificial ecosystem sprouting thousands of unpredictable posts every second, with no effective filter. It's a brand manager's worst nightmare.
that'll do it...all the #Facebook, uh @Meta #reputation, #culture, and #brand problems are now #fixed https://t.co/2gIIQMD2nR
— Torod Neptune (@torodneptune) February 16, 2022
Facebook and Instagram can be a lot of fun. But controversy follows them around like an ex-lover turned social media stalker. And when those controversies blow up, they reverberate around the world – thanks, ironically, to the power of social media.
Nick Clegg knows all about fragile reputations. He went from ‘Cleggmania’ to ‘Calamity Clegg’ between 2010 and 2015.
The rising star of British politics in the run-up to the 2010 general election, he gave a notable interview to Piers Morgan for GQ in 2008 that presented him as a refreshing alternative to the dull but solid incumbent Prime Minister Gordon Brown.
After that election there was a hung parliament and the Conservatives forged an alliance with Clegg’s Lib Dems to take power. Announcing the news in a press conference, the Conservative leader, David Cameron, was put on the spot when reminded of the time he had been asked his favourite joke, and replied: “Nick Clegg.”
Both men awkwardly laughed it off. But it was a sign of things to come.
Clegg, once famous for his liberal leanings, ended up tainted by association with the Conservatives. His popularity plummeted after he infuriated students by reneging on his manifesto pledge not to increase tuition fees, and he helped deliver a brutal age of austerity.
Those who had warned the alliance was ill-advised were proved correct. Cameron swept to victory in the 2015 general election, but Clegg's Liberal Democrats were decimated, returning just eight MPs from the 57 they had when the coalition was formed.
It was a chastened Clegg who offered his resignation as Lib Dem leader. In 2017 he lost his seat as an MP. He recuperated in the San Francisco sunshine, moving to Facebook in 2018. In the process, he went from earning £80,000 as deputy PM to a reported £2.7m before his recent promotion to Meta's president of global affairs.
He will earn his pay rise. He may still report to Mark Zuckerberg and Sheryl Sandberg, but officially the trio are on a par.
Clegg has an almighty task ahead. There will always be controversy to attend to.
Some of the criticism may be ill-intentioned and unjust; some of it may be fair. But in the eyes of its critics, Facebook has changed from a cheerful social media playground to a bile-ridden hate-fest with a reputation lower than its popularity with teens.
Instagram was all sunsets and selfies before it developed into wildly-distorted selfies and braggadocious lifestyles. Far more seriously, it has long been linked with exacerbating negative body-image issues among young people.
WhatsApp is just WhatsApp, and Clegg can rest relatively easily on that front – although it has become notorious for scurrilous messages and leaks, not unlike the one Clegg is currently caught up in, as reported by The Times.
If, as The Times suggests, Clegg did fancy a sneak peek at the UK’s upcoming Online Safety Bill, which is expected to hit parliament in March, it’s no surprise. How to soothe government officials bristling over the way Meta goes about its business will be top of his agenda.
Facebook has sold ads promoting anti-vaccine messages, comparing the US government's response to Covid-19 to Nazi Germany, casting doubt on the result of the 2020 election, and even pushing political violence. https://t.co/eMQ5f3YOpq
— CNN (@CNN) December 3, 2021
Regulation
When it comes to inappropriate content, Meta inevitably leaves itself open to fake news, hate speech, antisemitism, far-right and Nazi propaganda, and more, because it allows anyone to add content to its platforms whenever they like.
The entire set-up is also unregulated, which, as Meta continues to extend its already vast reach, has become one of its biggest problems in the eyes of governments around the world.
Beyond what is published on its platforms, and the impact it can have on people, there are serious concerns about how Meta harvests and uses personal data, how much tax it pays, even how it ships data between the US and Europe – and how it can be used to meddle in political affairs, such as elections.
When it comes to defending its reputation over accusations of publishing dodgy content, Meta points to the Oversight Board (OB) set up by Clegg in 2020.
Meta says this board comprises a group of individuals “carefully chosen for the diversity of their expertise and the quality of their judgement” and exists to independently help the company decide “what to take down, what to leave up, and why”.
It may be well-intentioned, but the trouble is any decision to take content down only starts after it has been published, and only then if it has been flagged up by someone. And after that, a six-stage process begins.
First, an appeal is submitted regarding questionable content. The OB then chooses whether the appeal will be heard. If it will, a panel is assigned the case. It then discusses the content and reaches a decision. After that it shares its verdict with Meta. Finally, Meta executes its decision – “unless doing so could violate the law”.
As it stands, much like the Advertising Standards Authority in the UK, the OB can only react after the content has been aired and complained about. And the viral nature of social media means that once a piece of content is out there, its impact grows as its reach spreads.
It may be straightforward to remove the original post, but it's not nearly so easy to eradicate subsequent and rapid replications, especially on other platforms not operated by Meta.
It’s also self-regulation, which is not the same as regulation. And in 2021, whistleblower Frances Haugen attempted to prove that Facebook doesn’t regulate itself, even when it should.
Allegations
Haugen, a former project manager at Facebook, took thousands of internal files and documents when she left her job in May 2021 and handed them to the US Securities and Exchange Commission (which enforces the law against stock market manipulation) and the Wall Street Journal.
That led to a media firestorm featuring damaging allegations over Facebook’s role in affecting teenage mental health, ethnic violence, the spread of misinformation, and more. In her subsequent testimony to the US Congress, Haugen said Facebook had put “astronomical profits before people”.
Mark Zuckerberg responded in a blog, writing: “At the heart of these accusations is this idea that we prioritise profit over safety and wellbeing. That’s just not true.”
A whistleblower can be brushed off as an aggrieved former employee. It’s a little trickier when the outspoken critic is the President of the United States.
In July 2021, Joe Biden was asked by a reporter about “platforms like Facebook” when it comes to combating the spread of misinformation about COVID-19. “They’re killing people,” he replied.
Facebook was annoyed. “We will not be distracted by accusations which aren’t supported by facts,” it said. “The facts show that Facebook is helping save lives. Period.”
Biden backtracked slightly a few days later, but also reinforced the sentiment behind his original quote.
“Facebook isn’t killing people, these 12 people are out there giving misinformation,” he clarified, referring to the so-called ‘Disinformation Dozen’ who apparently produced 65 per cent of the anti-vaxxer content on Facebook that kickstarted the Biden drama.
“Anyone listening to it is getting hurt by it,” added Biden. “It’s killing people. My hope is that Facebook, instead of taking it personally, that somehow I’m saying ‘Facebook is killing people’, that they would do something about the misinformation, the outrageous misinformation about the vaccine.”
Facebook reacted by shutting down or restricting a number of accounts and said it had removed “16 million pieces of content”.
That’s a lot of content. But what should really be of concern for Clegg is that a debate over whether Facebook is killing people existed in the first place.
How did it get to this point? Damning allegations have stacked up since the days of having fun adding your friends and posting daft photos of nights out. Cambridge Analytica, human rights, teen suicide and ethnic cleansing are just some of the unholy messes Facebook and Instagram have been associated with, rightly or wrongly.
Facebook continues to allow activists to incite ethnic massacres in Ethiopia’s escalating war, still letting users post content inciting violence through hate and misinformation. This is despite being aware it helps directly fuel tensions https://t.co/JRlWIZhLka
— Alfons López Tena (@alfonslopeztena) February 20, 2022
However baseless any of these associations may be, reputationally it is still a problem for Clegg as he tries to persuade governments around the world that it’s OK for Meta to publish fake news or incendiary content, and that its OB will try to clean the mess up afterwards, although it might leave a stain.
“For every 10,000 bits of content, you would only see five bits of hate speech,” Clegg told CNN six months ago. “I wish we could limit it to zero.”
He said it plaintively, but it’s a tacit admission that Facebook can’t control what it publishes. It’s not that it won’t try, but apparently its existing model, or the algorithms it uses, can’t achieve that level of control. And publishing five pieces of racist content is five too many – unless you’re looking to attract racists.
It’s estimated Facebook publishes about 300 million photos a day, so, using Clegg’s estimate, roughly 150,000 ‘hate’ images could be posted every day. That adds up to 54.8 million a year. That’s a lot for a 40-strong committee to investigate. And that’s just the photos. About half a million comments are posted every minute.
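Those figures are back-of-envelope arithmetic. A quick sketch (a hypothetical Python check, using the article's assumed 300 million photos a day and Clegg's five-per-10,000 rate) shows how they combine:

```python
# Back-of-envelope check of the estimates above. All inputs are
# assumptions quoted in the article, not Meta-published data.
photos_per_day = 300_000_000   # estimated photos posted to Facebook daily
hate_per_10k = 5               # Clegg's figure: 5 per 10,000 pieces of content

hate_photos_per_day = photos_per_day * hate_per_10k // 10_000
hate_photos_per_year = hate_photos_per_day * 365

print(f"Possible 'hate' photos per day:  {hate_photos_per_day:,}")   # 150,000
print(f"Possible 'hate' photos per year: {hate_photos_per_year:,}")  # 54,750,000 (~54.8m)
```

At that volume, even a tiny error rate becomes a moderation workload far beyond any committee of a few dozen people.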
Missteps
Following Clegg’s promotion to president of global affairs, Zuckerberg said Clegg would “lead our company on all policy matters” including “how we interact with governments as they consider adopting new policies and regulations” and “how we make the case publicly for our products and our work”.
If the task ahead is a diplomatic one, and it sounds like it is, Clegg is a solid choice on paper. He’s smart, personable, quick-witted and experienced at the highest level of politics. As a non-techie he’s also able to introduce an objective perspective to how Meta will be perceived outside the tech bubble.
Yes, his political judgement was called into question following his departure from the UK, and mistakes were made. Eyebrows were raised in the UK when Facebook hired him. But given the current state of British politics, those missteps seem relatively trivial now.
It also brings to mind David Cameron’s response after he was caught out with the reminder of his ‘Nick Clegg’ joke in that 2010 press conference. “We’re all going to have things we have said thrown back at us,” he replied. “But we are looking at the bigger picture.”
The big picture has changed a lot since 2010. Now the one Clegg looks at is global, and Cameron ended his career more chastened than Clegg ever was. Cameron also resigned, after a far bigger error of political judgement in calling the Brexit referendum, and ended up in a shepherd’s hut writing an underwhelming autobiography. He was also widely mocked – including by Danny Dyer, no less.
Clegg gave an interview in 2017, before his first appointment at Facebook, where he spoke of a desire to clean up and regulate social media. He’s got the job now. The scale of it is immense. And the metaverse is still to come.