Google's New Policy Mandates AI Content Disclosure in Political Ads

With significant elections approaching around the globe and artificial intelligence technologies growing increasingly capable, Google has announced a notable policy update. Starting in November, political advertisements featuring synthetic content, including images created by artificial intelligence, will need to clearly and conspicuously disclose their use of such content. This requirement is a new addition to Google's political content policy, covers its own platforms as well as YouTube, and applies to image, video, and audio content.

"In the era of AI, transparency in political ads is more essential than ever."

Implications of AI in Political Campaigns

The policy update comes at a crucial time, as the 2024 US presidential election campaign season intensifies and several countries prepare for major elections of their own in the same year. The rapid advancement of artificial intelligence now enables anyone to create convincing AI-generated text, and increasingly audio and video, quickly and inexpensively. This development has prompted digital information integrity experts to sound the alarm, warning that these new AI tools could fuel a surge of election misinformation that social media platforms and regulators may struggle to manage.

Instances of AI Content in Political Ads

Political advertisements now use AI-generated images, which can be difficult to distinguish from actual photos. For example, the presidential campaign of Florida Governor Ron DeSantis posted a video featuring AI-generated pictures of former President Donald Trump embracing Dr. Anthony Fauci. These synthetic images were displayed alongside authentic photos of the pair, with a text overlay stating "real-life Trump." Similarly, in response to President Joe Biden's official campaign announcement, the Republican National Committee released a 30-second advertisement that used AI-generated images to depict a dystopian United States following his re-election. Although the ad included a small disclaimer stating, "Built entirely with AI imagery," some viewers did not notice it on their initial viewing.

Google's Stand on Misleading Synthetic Content

Google's policy update emphasizes that it will mandate disclosures on ads using synthetic content that could potentially mislead users. For instance, an ad containing synthetic content that gives the impression of a person saying or doing something they did not do would necessitate a label. However, the policy will not cover synthetic or altered content that is "inconsequential to the claims made in the ad," such as image resizing, color corrections, or "background edits that do not create realistic depictions of actual events."

AI Companies' Commitment to Safety

In July, a group of prominent artificial intelligence companies, including Google, committed to a series of voluntary measures proposed by the Biden administration to improve the safety of their AI technologies. Under the agreement, these companies promised to develop technical mechanisms, such as watermarks, that notify users when content has been produced by AI. In addition, the Federal Election Commission is exploring options to regulate AI in political ads, marking a notable step toward the responsible use of AI in the political arena.

Michael Terry

I am Michael O Terry, a specialist in artificial intelligence. My work focuses on understanding the impact of artificial intelligence on people, analyzing its potential for development, and forecasting what the future holds for us. I am glad to share my knowledge and insights with you.