Navigating the Challenge: Generative AI and the Acceleration of Disinformation

"In an era of rapid technological advancement, the line between reality and fabrication is becoming increasingly blurred. Generative AI is accelerating disinformation at an unprecedented scale. How do we navigate this challenge?" - Michael O Terry

The Rising Challenge of Disinformation

Public awareness of disinformation is on the rise. Recent studies show that most American adults are cautious about their news sources, with many fact-checking their news and expressing a desire to limit the spread of false information. However, the advent of generative AI tools is making it increasingly difficult to control the proliferation of disinformation.

This was the critical insight from the disinformation and AI panel at TechCrunch Disrupt 2023. Panelists included Sarah Brandt, EVP of partnerships at NewsGuard, and Andy Parsons, senior director of the Content Authenticity Initiative (CAI) at Adobe. They discussed the growing threat of AI-generated disinformation, particularly in the context of upcoming elections, and pondered potential countermeasures.

Parsons highlighted the gravity of the situation by stating that, without a shared objective truth, democracy is at risk. Brandt and Parsons acknowledged that disinformation, whether AI-assisted or not, is far from a new phenomenon. However, they noted that generative AI has made it significantly more accessible and cheaper to create and disseminate disinformation on a large scale.

Generative AI: A Double-Edged Sword

Generative AI's potential for misuse was illustrated by citing statistics from NewsGuard, a company that rates the reliability of news and information websites. The company identified dozens of sites that appeared to be almost entirely generated by AI tools. Since then, hundreds more such websites have been spotted. This highlights how generative AI has become a tool for mass-producing and distributing disinformation, often to generate ad revenue or spread misinformation.

Generative AI's potential for misuse is also evident in the evolution of OpenAI's text-generating models. According to a study by NewsGuard, GPT-4, the latest model, is more prone to spreading misinformation than its predecessor, GPT-3.5. The study revealed that GPT-4 was more effective at promoting false narratives in various formats, from news articles to TV scripts.

The Road to Solutions

Addressing the challenge posed by generative AI is difficult, and the panelists discussed several potential solutions. Adobe, which has a suite of generative AI products known as Firefly, implements safeguards such as filters to prevent misuse. The company also co-founded the Content Authenticity Initiative, which promotes an industry standard for provenance metadata. However, adoption of these standards is voluntary, and there is no guarantee that others will follow suit or that the safeguards cannot be circumvented.

Watermarking was suggested as another potential solution. Several organizations, including DeepMind, are exploring watermarking techniques for synthetic media. DeepMind has developed SynthID, a tool that marks AI-generated images in a way that is invisible to the human eye but can be detected by a specialized detector. Other companies, such as Imatag and Steg.AI, offer similar watermarking tools designed to withstand image modifications.
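To make the idea concrete, here is a deliberately simplified sketch of invisible watermarking using least-significant-bit (LSB) embedding. This is a toy illustration of the general principle only: production systems like SynthID use far more robust, learned techniques that survive cropping, compression, and other edits, and the function names below are illustrative, not any vendor's API.

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide each bit of `mark` in the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    # Expand the mark into individual bits, least significant first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for watermark")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes of the mark back out of the pixel LSBs."""
    mark = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        mark.append(byte)
    return bytes(mark)

# Flipping a pixel's LSB changes its value by at most 1 out of 255:
# imperceptible to a viewer, but recoverable by the detector.
image = bytearray(range(256))   # stand-in for raw pixel data
marked = embed_watermark(image, b"AI")
print(extract_watermark(marked, 2))
```

The trade-off the panelists alluded to is visible even here: a mark this simple is trivially destroyed by re-encoding the image, which is why real-world schemes spread the signal redundantly across the image rather than storing it in fixed bit positions.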

Brandt expressed optimism that the economic incentives would encourage companies developing generative AI tools to be more thoughtful about their usage and design to prevent misuse. After all, the trustworthiness of their content is vital for their business. However, the availability of competent, safeguard-free, open-source generative AI models raises doubts about the effectiveness of these measures. The battle against disinformation in the age of AI continues, and only time will tell what the outcome will be.

Michael Terry

Greetings. I am Michael O Terry, a specialist in artificial intelligence. My work focuses on understanding the impact of AI on people, analyzing its potential for development, and forecasting what the future holds for us. It is my pleasure to share my knowledge and insights with you.