In a move to combat potential misuse of artificial intelligence (AI) during elections, Google announced an update to its political content policy. Starting in mid-November 2023, verified election advertisers will be required to disclose whenever their ads contain synthetic media depicting real or realistic-looking people or events.
This policy change comes amid growing concerns about the potential for AI-generated deepfakes to manipulate voters. Deepfakes are videos or audio recordings that have been altered using AI to make it appear as if someone is saying or doing something they never did.
Under the new policy, advertisers must select a checkbox during campaign setup indicating that an ad contains “altered or synthetic content.” Google will then automatically generate a disclosure label within the ad itself, displayed prominently so users are aware of the content’s artificial nature.
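Google has not published an implementation, but the flow it describes (a setup-time checkbox that triggers an auto-generated in-ad label) can be sketched roughly as below. All names here (`ElectionAd`, `build_disclosure_label`, the label text) are hypothetical illustrations, not Google's actual Ads API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the disclosure flow described above.
# These names do not correspond to any real Google Ads API.
@dataclass
class ElectionAd:
    advertiser: str
    media_format: str          # "image", "video", or "audio"
    contains_synthetic: bool   # the "altered or synthetic content" checkbox

def build_disclosure_label(ad: ElectionAd) -> Optional[str]:
    """Return the auto-generated disclosure text, or None if not required."""
    if ad.contains_synthetic:
        # Per the policy, Google generates the label itself;
        # the advertiser only sets the flag at campaign setup.
        return "This ad contains altered or synthetic content."
    return None

ad = ElectionAd(advertiser="Example PAC", media_format="video",
                contains_synthetic=True)
print(build_disclosure_label(ad))
# -> "This ad contains altered or synthetic content."
```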
“Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad will be exempt from these disclosure requirements. This includes editing techniques such as image resizing, cropping, color or brightening corrections, defect correction (for example, ‘red eye’ removal), or background edits that do not create realistic depictions of actual events,” reads the Google ad policy page.
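To make the exemption concrete, here is a small hypothetical classifier over the edit types the policy names. The exempt categories come straight from the quoted text; the function itself is only a sketch, not how Google actually evaluates ads.

```python
# Edit techniques the quoted policy treats as "inconsequential" (exempt):
EXEMPT_EDITS = {
    "resize",
    "crop",
    "color_correction",
    "brightening_correction",
    "red_eye_removal",
    "background_edit_nonrealistic",  # background edits that don't depict real events
}

def requires_disclosure(edits: set, depicts_realistic_people_or_events: bool) -> bool:
    """Disclosure is required when edits go beyond the exempt list,
    or when the result realistically depicts actual people or events."""
    if depicts_realistic_people_or_events:
        return True
    return bool(edits - EXEMPT_EDITS)

# A cropped, color-corrected photo: exempt.
print(requires_disclosure({"crop", "color_correction"}, False))  # False
# A face swap showing a candidate doing something they never did: disclose.
print(requires_disclosure({"face_swap"}, True))                  # True
```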
The disclosure requirement applies to all formats, including images, videos, and audio content used in election ads. This move aims to encompass the full spectrum of potential manipulation techniques that AI can enable.
While some may argue that disclosure lessens an ad’s impact, Google believes transparency is crucial. “Voters deserve to know when they’re being exposed to manipulated content,” a Google spokesperson said.
This policy update is part of Google’s broader efforts to combat misinformation, especially during elections. The company has also implemented measures to fact-check political ads and promote credible news sources.