AI-Generated Disinformation: A Threat to Democratic Elections

In an increasingly digitized world, the power and influence of artificial intelligence (AI) have permeated every aspect of our lives. From how we interact with each other to how we conduct business, AI has undeniably taken center stage. However, as with every technological advancement, it brings risks and challenges that must be addressed. One such challenge is the threat of AI-generated disinformation, especially in the context of democratic elections.

In the era of advanced artificial intelligence, AI-generated disinformation looms over democratic societies.

The European Union's Stance on AI-Generated Disinformation

The European Union (EU) has recently expressed concern over the potential risks of AI-generated disinformation. Věra Jourová, the bloc's values and transparency commissioner, has warned that more measures must be taken to counteract these threats, especially ahead of the upcoming European Parliament elections. She highlighted the need for platforms to implement safeguards that inform users about the synthetic origin of online content.

The Role of OpenAI and ChatGPT

OpenAI, the maker of ChatGPT, is one of the leading organizations in artificial intelligence. The EU commissioner has planned a meeting with OpenAI representatives to discuss these issues. Because the company is not a signatory to the bloc's anti-disinformation Code, it may face pressure to join the effort.

AI-Generated Images and Elections

The potential impact of AI-generated images on elections cannot be ignored. The commissioner's remarks follow the initial pressure applied to platforms this summer, when she urged signatories to label deepfakes and other AI-generated content. She called on Code signatories to create a dedicated track for tackling AI-produced content, stating that machines should not have free speech.

The incoming pan-EU AI regulation, the EU AI Act, is expected to make user disclosures a legal requirement for makers of generative AI technologies such as AI chatbots. However, the law is still in draft and will not apply for a few years. In the meantime, the Commission is relying on the Code as a stop-gap measure, encouraging signatories to proactively disclose deepfake content.

Efforts by Major Tech Companies

Major tech companies like Google, Meta, Microsoft, and TikTok have published reports discussing their efforts in addressing the risks associated with AI-generated content. They have highlighted their commitment to developing technology responsibly and outlined their approach to maintaining high information quality standards. Google, for instance, has announced that it will soon integrate innovations in watermarking, metadata, and other techniques into its latest generative models.

Microsoft, a significant investor in OpenAI, has incorporated generative AI capabilities into its search engine, Bing, and says it is taking a whole-company approach to the responsible implementation of AI. Meanwhile, TikTok has revised its synthetic media policy to address content created or modified by AI technology on its platform.

The Threat of Kremlin Propaganda

One of the critical concerns raised by the EU is the spread of Kremlin propaganda, especially given the upcoming EU elections. The Russian state has been accused of using disinformation as a weapon of mass manipulation, both internally and internationally. The commissioner has urged platform signatories to be vigilant and adjust their actions to reflect the ongoing war in the information space.

Future Measures

In light of the Digital Services Act (DSA), the EU expects all signatories to take seriously their responsibility for mitigating the disinformation risks posed to elections. Compliance with the DSA is now mandatory for all very large online platforms (VLOPs), and the Commission intends to convert the voluntary Code of Practice into a Code of Conduct that can form part of a co-regulatory framework for dealing with disinformation risks.

As AI plays an ever-greater role in our lives, we must be mindful of the challenges and risks it poses, particularly to democratic processes. Through vigilance, regulation, and effective countermeasures, we can minimize these risks and help keep the information landscape fair and balanced.


Michael Terry

I am Michael O Terry, a specialist in artificial intelligence. My work focuses on understanding the impact of AI on people, analyzing its potential for development, and forecasting what the future holds for us. It is my pleasure to share my knowledge and insights with you.