Vinita Bhatia
Apr 21, 2025

Ad block party: Can Google’s AI shut the gate early?

The tech major’s new AI-led regime is pushing ad vetting upstream, giving agencies cleaner inventory but demanding sharper compliance and smarter creative.

For creative agencies and brand marketers, Google’s evolving ad safety playbook offers both assurance and a challenge.

Last week, Google released its 2024 Ads Safety Report, spotlighting how it used a mix of artificial intelligence, policy enforcement, and human oversight to root out bad actors in the digital advertising ecosystem. The numbers alone are staggering: 5.1 billion ads blocked or removed, 9.1 billion restricted, and 39.2 million advertiser accounts suspended globally.

The report also underscores the sheer scale of the problem and the growing sophistication of Google's response: 1.3 billion publisher pages blocked or restricted and 220,000 publisher sites subjected to broader enforcement. The company rolled out over 50 LLM enhancements, introduced more than 30 policy updates, suspended 700,000 impersonation scam accounts, and saw a 90% drop in impersonation scam reports.

This combination of data science and domain expertise marks a shift from reactive moderation to proactive prevention. Beneath the data lies a deeper story: one of earlier detection, localised enforcement, and a technology-led pivot toward pre-emptive safety.

At a time when over two billion people use Google services every day, safeguarding the digital ad ecosystem is no longer just a platform hygiene exercise; it’s a public responsibility. “Our 2024 progress highlights the increasing effectiveness of AI in identifying and blocking fraudulent attempts to enter our ecosystem,” said Alex Rodriguez, general manager—ads safety at Google, in an exclusive Q&A with Campaign.

The new tech arsenal: LLMs on the frontline

The most significant shift in 2024 was Google’s use of advanced large language models (LLMs) to streamline policy enforcement. “We launched over 50 enhancements to our LLMs,” said Rodriguez. These updates didn’t just speed up content evaluations; they also empowered Google’s systems to detect abuse patterns, including scams and impersonations, before a single impression was served.

The difference from previous years is scale and specificity. Earlier machine learning models needed massive datasets to perform adequately. The newer LLMs are built to recognise subtle variations in behaviour and quickly distinguish between legitimate advertisers and malicious actors.

As Rodriguez put it, “Prioritising these technical advancements allows our teams to focus on complex ambiguities, in turn providing LLMs with nuanced training data for future improvements.” In short, AI is not just filtering ads; it’s learning from them.

Cloaking, malware and abuse of the ad network

In another notable shift, Google’s ad safety model is now heavily front-loaded. A growing share of accounts are being suspended at the set-up stage itself, based on fraud signals like manipulated payment credentials, masked IPs, or cloaked content.
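To make the idea of front-loaded enforcement concrete, here is a minimal, purely hypothetical sketch in Python of how setup-time fraud signals could be combined into a risk score that gates a new advertiser account before any ad is served. The signal names, weights, and threshold are invented for illustration and do not describe Google's actual systems.

```python
# Hypothetical illustration only: a toy risk score that gates advertiser
# sign-ups on setup-time fraud signals, loosely mirroring the idea of
# "front-loaded" enforcement described above. Signal names, weights, and
# the threshold are invented for this sketch, not taken from Google.

SIGNAL_WEIGHTS = {
    "payment_credentials_mismatch": 0.5,   # billing details that do not line up
    "masked_or_proxied_ip": 0.3,           # sign-up traffic routed through anonymisers
    "cloaked_landing_page": 0.6,           # page shown to reviewers differs from users'
    "reused_suspended_assets": 0.7,        # creatives or domains tied to prior suspensions
}

SUSPEND_THRESHOLD = 0.8  # arbitrary cut-off for the example


def setup_risk_score(signals: dict) -> float:
    """Sum the weights of whichever fraud signals fired at account set-up."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))


def gate_new_account(signals: dict) -> str:
    """Decide, before any ad is served, whether an account proceeds."""
    score = setup_risk_score(signals)
    if score >= SUSPEND_THRESHOLD:
        return "suspend_at_setup"       # blocked before a single impression
    if score > 0:
        return "hold_for_human_review"  # ambiguous cases go to reviewers
    return "allow"


if __name__ == "__main__":
    print(gate_new_account({"masked_or_proxied_ip": True, "cloaked_landing_page": True}))
    # -> suspend_at_setup
```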

“While LLMs have increased our speed and accuracy, people remain an integral part of this process,” Rodriguez maintained. Human reviewers are called in during appeals or when automated systems encounter ambiguity, especially in sensitive categories like healthcare, finance, or political content.

The numbers point to this hybrid model working. In 2024 alone, more than 700,000 advertiser accounts promoting impersonation scams—often with AI-generated deepfakes of public figures—were permanently suspended. This led to a 90% reduction in user reports for such ads year-over-year.

A substantial portion of the ads removed in 2024, 793.1 million, fell under the category of “abusing the ad network.” This category encompasses cloaking (showing different content to users and reviewers), malware distribution, and manipulation tactics designed to bypass review systems.

Rodriguez explained that this policy category largely covers tactics meant to circumvent the company’s review processes, such as cloaking, and also includes practices like distributing malware.

This behaviour poses a double threat to user safety and advertiser trust. Google’s pre-emptive response, powered by LLMs, is aimed at closing the gap between detection and action, while minimising consumer exposure.

Policing the publisher side

It wasn’t just advertisers under the microscope. In 2024, Google blocked or restricted ads from running on 1.3 billion publisher pages, and broader site-level action was taken on 220,000 websites.

Significantly, 97% of these actions were triggered by Google’s AI systems. This automation has allowed for quicker site reviews, ensuring ad placements are not just compliant but contextually safe — a growing concern in user-generated content environments like YouTube Shorts.

“We have a range of publisher policies that govern what content can monetise on our platform. We enforce these policies consistently, regardless of the publisher, and across multiple surfaces,” said Rodriguez.

For brand marketers, this provides a layer of assurance around adjacency risks—the concern that their ad might appear next to offensive or inappropriate content. Google’s systems are increasingly able to evaluate context in real time, reducing lag and increasing safety at scale.

Cultural context and local nuance

One of the quieter challenges in global ad safety is localisation. A banned ad in one market may be perfectly acceptable in another, and cultural sensitivities are constantly evolving. Navigating these differences requires a mix of machine precision and human judgment.

“Our approach to localising content safety frameworks involves a dynamic interplay of adhering to local laws, understanding and adapting to societal norms and nuances, and collaborating with local stakeholders,” said Rodriguez.

To do this, Google has built teams of policy and enforcement specialists across geographies. “We make sure that our teams are representative of the regions that we are serving,” he added.

This isn’t just about legal compliance; it’s about reputational safety for brands operating across borders.

India’s scale and complexity, for instance, make it a critical testbed for Google's ad safety protocols. In the country in 2024, 247.4 million ads were removed and 2.9 million advertiser accounts suspended. Financial services, trademark abuse, network circumvention, personalised ads, and gambling content were the top five reasons for enforcement.

Rodriguez confirmed that India’s emerging risk areas, including fantasy gaming and real-money apps, are being monitored with increasing scrutiny. “The Advertiser Identity Verification programme now covers more than 200 countries and territories… We do have additional certification requirements for certain verticals that vary by country,” he noted.

While not confirming any imminent expansion of the programme’s criteria in India, Rodriguez indicated that dynamic risk assessment remains central to Google’s enforcement strategy. “These certifications are separate and in addition to Advertiser Identity Verification.”

What this means for agencies and advertisers

For creative agencies and brand marketers, Google’s evolving ad safety playbook offers both assurance and a challenge. On the one hand, it reduces exposure to reputational risks and ensures cleaner inventory. On the other, it requires closer compliance with increasingly granular policy frameworks and verification processes.

It also raises the bar for platform-native creativity. As Rodriguez pointed out, the future of ad safety isn’t just about blocking the bad—it’s about fast-tracking the good. “Our latest models… distinguish legitimate businesses from scams for precise enforcement at scale.”

The takeaway from the company’s 2024 report is simple. In an AI-governed, fraud-resistant ecosystem, the advantage will lie with advertisers who understand both the letter and the spirit of the rules, and who can create content that works within them.

Source:
Campaign India