
Artificial intelligence (AI) is one of the most transformative technologies of our time. It is already having a major impact on our lives, in areas such as healthcare, transportation, and entertainment. As AI continues to develop, it is likely to have an even greater impact on society.
However, with great power comes great responsibility. It is important to ensure that AI is developed and used in a safe and responsible manner. This means addressing the potential risks associated with AI, such as bias, misuse, and security vulnerabilities.
Google is one of the leading companies in the field of AI. It is also one of the companies that is taking the most seriously the issue of AI safety and security. In a recent blog post, Google announced the expansion of its commitment to secure AI. This includes three key initiatives:
- Expanding the Vulnerability Rewards Program (VRP) to reward for attack scenarios specific to generative AI. This will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone.
- Introducing the Secure AI Framework (SAIF). SAIF is a set of principles and guidelines for building and deploying secure AI systems. It is designed to help organizations identify and mitigate risks, and to build trust in AI.
- Working with partners to deliver secure AI offerings. Google is working with companies like GitLab and Cohesity to develop new capabilities to help customers build secure AI systems.
In addition to these three initiatives, Google is also taking other steps to make AI safer, such as:
- Investing in research on AI safety and security. Google is funding research at universities and other institutions to develop new ways to make AI safer.
- Educating the public about AI safety and security. Google is working to educate the public about the potential risks and benefits of AI, and how to use AI safely.
- Collaborating with other organizations on AI safety and security. Google is working with other companies, governments, and non-profit organizations to develop and implement standards and best practices for AI safety and security.
Google’s commitment to safe AI is an important step towards making AI safer for everyone. By expanding its investment in research, education, and collaboration, Google is helping to build a more secure and trustworthy future for AI.
The Importance of AI Safety and Security
AI safety and security is important for a number of reasons. First, AI systems are becoming increasingly powerful and complex. This means that they have the potential to cause significant harm if they are not properly secured.
Second, AI systems are increasingly being used in critical applications, such as healthcare and transportation. This means that any security vulnerabilities in AI systems could have serious consequences.
Third, AI systems are becoming increasingly interconnected. This means that a security vulnerability in one AI system could be exploited to attack other AI systems.
The Challenges of Ensuring AI Safety and Security
There are a number of challenges to ensuring AI safety and security. One challenge is that AI systems are often complex and opaque. This makes it difficult to identify and mitigate security vulnerabilities.
Another challenge is that AI systems are constantly evolving. This means that new security vulnerabilities can emerge over time.
Finally, AI systems are increasingly being used in new and innovative ways. This means that it is difficult to anticipate all of the potential security risks associated with AI.
Cyberthreats evolve quickly and some of the biggest vulnerabilities aren’t discovered by companies or product manufacturers — but by outside security researchers. That’s why we have a long history of supporting collective security through our Vulnerability Rewards Program (VRP), Project Zero and in the field of Open Source software security. It’s also why we joined other leading AI companies at the White House earlier this year to commit to advancing the discovery of vulnerabilities in AI systems.
Today, we’re expanding our VRP to reward for attack scenarios specific to generative AI. We believe this will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone. We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.
New technology requires new vulnerability reporting guidelines
As part of expanding VRP for AI, we’re taking a fresh look at how bugs should be categorized and reported. Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations). As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks. But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure. In August, we joined the White House and industry peers to enable thousands of third-party security researchers to find potential issues at DEF CON’s largest-ever public Generative AI Red Team event. Now, since we are expanding the bug bounty program and releasing additional guidelines for what we’d like security researchers to hunt, we’re sharing those guidelines so that anyone can see what’s “in scope.” We expect this will spur security researchers to submit more bugs and accelerate the goal of a safer and more secure generative AI.
Two new ways to strengthen the AI Supply Chain
We introduced our Secure AI Framework (SAIF) — to support the industry in creating trustworthy applications — and have encouraged implementation through AI red teaming. The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations, and that means securing the critical supply chain components that enable machine learning (ML) against threats like model tampering, data poisoning, and the production of harmful content.
Today, to further protect against machine learning supply chain attacks, we’re expanding our open source security work and building upon our prior collaboration with the Open Source Security Foundation. The Google Open Source Security Team (GOSST) is leveraging SLSA and Sigstore to protect the overall integrity of AI supply chains. SLSA involves a set of standards and controls to improve resiliency in supply chains, while Sigstore helps verify that software in the supply chain is what it claims to be. To get started, today we announced the availability of the first prototypes for model signing with Sigstore and attestation verification with SLSA.
These are early steps toward ensuring the safe and secure development of generative AI — and we know the work is just getting started. Our hope is that by incentivizing more security research while applying supply chain security to AI, we’ll spark even more collaboration with the open source security community and others in industry, and ultimately help make AI safer for everyone.
Conclusion
Despite the challenges, it is important to ensure that AI is developed and used in a safe and responsible manner. Google’s commitment to safe AI is an important step in this direction. By expanding its investment in research, education, and collaboration, Google is helping to build a more secure and trustworthy future for AI.
Beyond Google’s Efforts
Google is not the only company that is taking steps to make AI safer. Other companies, such as Microsoft, Facebook, and Amazon, are also investing in AI safety and security research. In addition, there are a number of non-profit organizations and academic institutions that are working to make AI safer.
These efforts are essential to ensuring that AI is developed and used in a way that benefits everyone. By working together, we can build a more secure and trustworthy future for AI.