Google will soon require political ads to disclose when AI-generated images, videos and audio have been used.
From November, political adverts must clearly feature a disclaimer when “synthetic content” is used to depict “realistic-looking people or events”, reports Bloomberg.
Why we care. Tackling fake news and enhancing online safety could boost people’s trust in the internet, which could ultimately give them more confidence to shop online.
How will it work? Political ads that use AI-generated content must carry clear labels flagging it, such as:
- “This image does not depict real events.”
- “This video content was synthetically generated.”
- “This audio was computer generated.”
Campaigns that use AI for “inconsequential” tweaks, such as minor photo edits like red-eye removal, will not need to feature a disclaimer.
Why now? The new rules are coming into force one year ahead of the next US Presidential election. A Google spokesperson told the BBC that the move was in response to “the growing prevalence of tools that produce synthetic content”.
The news also comes one week after X (the platform formerly known as Twitter) announced that it is bringing back political ads ahead of the 2024 US election.
What has Google said? The search giant explains the consequences of not adhering to its rules in its political content policy:
- “Non-compliance with our political content policies may result in information about your account and political ads being disclosed publicly or to relevant government agencies and regulators.”
Deep dive. Read Google’s political content policy for more information on election ads.