Tech giant Google on Wednesday said it had blocked more than 5.1 billion “bad” (policy-violating) ads in 2024, using advanced AI capabilities, including large language models (LLMs) such as Gemini. The company added that it had also restricted over 9.1 billion ads to strengthen its enforcement and protection efforts.
The findings were published in its 2024 Ads Safety Report, which outlines Google’s work to prevent the malicious use of its advertising platforms. “Advanced AI has enabled significant progress in our ability to combat bad ads and bad actors across the ecosystem. Our policies and enforcement capabilities make the web safer for people, stronger for businesses, and more successful for publishers,” said Alex Rodriguez, General Manager, Ads Safety.
Google said LLMs have not only improved policy enforcement but also enhanced the ability to proactively prevent abuse. “These efforts kept billions of policy-violating ads from ever showing to a consumer, while simultaneously ensuring that legitimate businesses can show ads to potential customers quickly,” the report noted.
In 2024, the company blocked over 793 million ads for abusing the ad network, 503 million for trademark violations, 491 million for breaching policies on personalised ads, and 280 million due to legal requirements. Other commonly enforced policies included those related to misrepresentation, gambling, adult content, and counterfeit goods.
One notable trend highlighted by Google was the rise of public figure impersonation ads, where bad actors used AI-generated imagery or audio to falsely suggest celebrity endorsements and promote scams. “To fight back, we quickly assembled a dedicated team of over 100 experts to analyse these scams and develop effective countermeasures, such as updating our Misrepresentation policy to suspend advertisers that promote these scams,” the company said. It also took action against 1.3 billion publisher pages for policy violations.
The 9.1 billion restricted ads were subject to limited promotion due to content deemed legally or culturally sensitive, the company said, often involving topics such as financial services, adult content, and healthcare. “These promotions may not show to every user in every location, and advertisers may need to meet additional requirements before their ads are eligible to run,” the report stated. Google also suspended more than 39.2 million advertiser accounts over the course of the year.
To safeguard election integrity, Rodriguez said Google expanded its ad disclosure rules in 2024, becoming the first major platform to mandate labels on AI-generated political ads. The company also ramped up enforcement against false claims, ensured that all election ads included “paid for by” disclaimers, and blocked more than 11 million ads from unverified accounts. Nearly 9,000 new political advertisers were verified during the year.
Google further noted that it deployed tools such as User Identity Verification in over 200 countries and territories to help prevent bad actors from returning to the platform. More than 90% of the ads seen by users on Google now come from verified advertisers, the company said. It made over 30 updates to its ads and publisher policies over the past year.