Google Expands Bug Bounty Program to Include Generative AI Threats

Summary

Google's expansion of its bug bounty program to include generative AI threats incentivizes research around AI safety and security but also aids in identifying potential issues. The move is aimed at making AI safer and more secure for everyone.

Google Expands Bug Bounty Program to Include Generative AI Threats
Google Expands Bug Bounty Program to Include Generative AI Threats

Photo from Google

Google, the tech titan, has significantly expanded its vulnerability rewards program (VRP) to include the potential threats related to generative AI. The decision to broaden the VRP was made to encourage research around AI safety and security and to shed light on potential issues, thereby making AI safer for everyone.

Google's VRP, the bug bounty program, rewards ethical hackers who responsibly disclose security flaws. The program has been instrumental in identifying and addressing numerous security vulnerabilities over the years. However, with the advent of generative AI, new security concerns have arisen, prompting Google to reconsider how it categorizes and reports bugs.

The Role of Google's AI Red Team

Google has enlisted the help of its newly formed AI Red Team in this endeavor. The AI Red Team is a group of hackers that simulate a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders. Their goal is to hunt down security weaknesses in technology. Recently, the team conducted an exercise to identify the most significant threats to the technology behind generative AI products such as ChatGPT and Google Bard.

The team's findings indicated that large language models (LLMs) are susceptible to prompt injection attacks. In such attacks, a hacker crafts adversarial prompts that can affect the model's behavior. An attacker could use this attack to generate harmful or offensive text or leak sensitive information.

This highlights that the scope of cybersecurity threats is continually evolving and adapting to new technologies, and protection measures must grow simultaneously.

Understanding the Threats

Apart from prompt injection attacks, the team also warned against another type of attack called training-data extraction. This attack allows hackers to reconstruct verbatim training examples to extract personally identifiable information or passwords from the data. These two types of attacks, along with model manipulation and model theft attacks, are covered in the scope of Google's expanded VRP.

However, Google has specified that it will not reward researchers who uncover bugs related to copyright issues or data extraction that reconstructs non-sensitive or public information. The monetary rewards will depend on the severity of the vulnerability discovered. For instance, researchers who find command injection attacks and deserialization bugs in susceptible applications, such as Google Search or Google Play, can earn up to $31,337. In contrast, if the flaws affect lower priority apps, the maximum reward is $5,000.

Google's Commitment to Security

In 2022, Google paid more than $12 million in rewards to security researchers. This staggering amount underscores Google's commitment to ensuring the safety and security of its products and services. By expanding the VRP to include generative AI threats, Google is again demonstrating its dedication to staying ahead of potential vulnerabilities and threats in the rapidly evolving landscape of artificial intelligence.

Share the Article by the Short Url:

Rob Wang

Rob Wang

Greetings, I am Rob Wang, a seasoned digital security professional. I humbly request your expert guidance on implementing effective measures to safeguard both sites and networks against potential external attacks. It would be my utmost pleasure if you could kindly join me in this thread and share your invaluable insights. Thank you in advance.