The same technology fueling a proliferation of digital advertising fraud is also helping Google detect fraud and remove bad actors at greater scale.
Google doubled the number of advertiser accounts it blocked or removed in 2023 versus the prior year, as advancements in generative AI triggered a rise in fraud and scams while also improving the tech company’s detection systems.
In its 2023 Ads Safety report, released Wednesday, Google said it suspended 12.7 million advertiser accounts in 2023 for violating its policies, up from 6.7 million in 2022.
The volume of publisher sites it took action against also ramped up significantly, with more than 2.1 billion pages blocked or restricted from serving ads and 395,000 sites hit with broader enforcement action — up from 1.5 billion and 143,000 in 2022, respectively.
Google attributes the surge to improvements in its enforcement efforts spurred by its adoption of large language models (LLMs).
These models, such as Google’s Gemini, can read and interpret far higher volumes of data, at greater speed, than traditional machine learning models. They also learn quickly, requiring fewer examples before they can identify similar techniques.
“These advanced reasoning capabilities have already resulted in larger scale and more precise enforcement decisions on complex policies,” said Duncan Lennox, VP and general manager of ads privacy and safety at Google, during a press briefing earlier this week.
Google said machine learning, including its latest LLMs, triggered more than 90% of its page-level enforcement actions in 2023. LLMs were directly responsible for blocking or removing 35 million ads in financial services, sexual content, misrepresentation and gambling, the first categories where the new models were rolled out, the company said.
A total of 5.5 billion ads were removed from Google in 2023, slightly higher than 2022’s 5.2 billion.
Yet the same technology has led to the development of tools that make more sophisticated fraud and scams easier to deploy.
“There’s no question that the introduction of readily available AI-generated video tools has exacerbated the prevalence of deepfake scam ads,” said Alejandro Borgia, director of ads privacy and safety at Google.
Since the end of 2023, Borgia said, Google has seen a growing number of deepfake ads promoting fake celebrity endorsements, which are used to scam people into purchasing faulty or deceptive products or clicking on harmful links. Google updated its misrepresentation policy in March to ban these tactics in ads.
In fact, novel threats caused Google to update its ads and publisher policies 31 times in 2023.
“2023 introduced a lot of new challenges, from the introduction of innovative technology like generative AI, to global conflicts, to new scams that threatened our users. And the digital advertising space has to be nimble and ready to react,” said Lennox.
Deepfakes are not only concerning as a tool to conduct scams — in the biggest election year in history, there are concerns deepfakes will be used to peddle false political narratives.
Borgia said Google is “already seeing” election advertisers use generative AI tools within their campaigns, including to generate high-quality videos.
He shared an optimistic view about how these technologies “enable them [political advertisers] to create better ads,” rather than directly addressing the democratic risks.
Google rolled out a new policy in November requiring verified election advertisers to prominently disclose when their ads contain synthetic content depicting realistic-looking people or events. Using AI for routine edits, such as resizing images, correcting colors or editing backgrounds in ways that don't create realistic depictions of actual events, is exempt from these disclosure requirements.
Borgia said election advertisers “are generally following the disclosure requirements.” Aware that self-disclosures only go so far, he said the company also proactively scans political ads for manipulated media using both automated systems and manual review.
Google began adding invisible watermarks to images and audio generated by its own models in August last year to help identify synthetic media. The tech giant joined the Coalition for Content Provenance and Authenticity, or C2PA, which is developing an open technical standard for AI labeling, in February.