Google Cloud is addressing growing concerns about artificial intelligence (AI) security with its Secure AI Framework, which builds on the company's internal security practices and gives organizations tools and guidance to manage the evolving risks of AI deployments.
In a recent interview with Computer Weekly in Singapore, Phil Venables, Google Cloud's chief information security officer, highlighted the framework's focus on three key areas: software lifecycle risk, data governance, and operational risk.
Google Cloud's approach to AI security stems from its unique position of owning and operating the entire AI stack, from hardware and infrastructure to models and data, allowing it to build security measures from the ground up, Venables said.
“But we realize that it is not enough to have a strong core infrastructure for our AI security,” Venables said. “We have to enable customers to safely and securely manage AI in their environments.”
Google Cloud's Secure AI Framework addresses software lifecycle risks by providing tools within Vertex AI to manage the software development process for AI workloads. This includes managing model weights and parameters, an advantage over offerings that require separate tools and processes.
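To make that kind of lifecycle control concrete, here is a minimal sketch, assuming the google-cloud-aiplatform Python SDK, of registering a set of model weights as a versioned entry in the Vertex AI Model Registry. The project, bucket and container image names are placeholders rather than anything cited in the interview.

```python
# Minimal sketch: register a model artifact as a versioned entry in the
# Vertex AI Model Registry using the google-cloud-aiplatform SDK.
# Project, location, bucket and container image are placeholder values.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

# Uploading the serialized weights creates a managed, versioned model resource,
# so lifecycle controls such as versioning and access policies apply to it.
model = aiplatform.Model.upload(
    display_name="fraud-classifier",
    artifact_uri="gs://example-bucket/models/fraud-classifier/v3/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)
print(model.resource_name, model.version_id)
```

Treating weights as registry entries rather than loose files is what lets the same governance process cover both the code and the model artifacts of an AI workload.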
Data management, another critical area, is addressed through features that allow customers to trace data lineage, ensure data integrity, and maintain clear separation between their data and Google's underlying foundation models. This prevents data leakage and helps organizations, even those with little data management experience, manage their AI data effectively.
Operational risks, which arise once an AI system is deployed, are mitigated through features like Model Armor. This capability allows users to implement input and output filters, control the flow of data to and from the AI model, and prevent malicious attacks such as prompt injection, where attackers manipulate input prompts to force the model into unintended behavior.
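The sketch below illustrates the general idea of input and output filtering around a model call; it is not Model Armor's actual API, and the patterns and redaction rules are invented purely for illustration.

```python
# Generic illustration of input/output filtering around a model call.
# This is NOT Model Armor's API; patterns and rules are invented examples.
import re

BLOCKED_INPUT_PATTERNS = [
    r"ignore (all|any) previous instructions",  # common prompt-injection phrasing
    r"reveal (your|the) system prompt",
]
BLOCKED_OUTPUT_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                   # e.g. US social security numbers
]

def filter_input(prompt: str) -> str:
    """Reject prompts matching known injection patterns before they reach the model."""
    for pattern in BLOCKED_INPUT_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt rejected by input filter")
    return prompt

def filter_output(response: str) -> str:
    """Redact sensitive content from the model's output before it is returned."""
    for pattern in BLOCKED_OUTPUT_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response)
    return response

def guarded_generate(model_call, prompt: str) -> str:
    """Wrap any model call with the input and output filters."""
    return filter_output(model_call(filter_input(prompt)))
```

In practice such filters sit in front of and behind the model endpoint, so every prompt and every response passes through policy checks regardless of which application issued the request.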
The framework is also being updated to keep up with risks such as data poisoning, which manipulates training data to corrupt model behavior and produce biased or harmful output, Venables said. Google Cloud provides guidance on data integrity management and implements filters to prevent such attacks.
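One common building block of the data integrity management mentioned above is verifying training files against a previously recorded manifest of checksums before they are used, so silently modified or injected records are caught early. The sketch below assumes a hypothetical directory of JSONL training files and a JSON manifest; it is not a description of Google Cloud's implementation.

```python
# Sketch of a checksum-based integrity check against data poisoning.
# The file layout and manifest format are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path, manifest_path: Path) -> None:
    """Fail if any training file is missing, added, or altered since the manifest was recorded."""
    manifest = json.loads(manifest_path.read_text())
    actual = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.jsonl"))}
    if actual != manifest:
        raise RuntimeError("Training data does not match the recorded manifest")

# Example usage:
# verify_dataset(Path("training_data"), Path("manifest.json"))
```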
Addressing the issue of customer maturity in adopting these security measures, Venables noted that many organizations are still transitioning from AI prototypes to full production and are finding the Secure AI Framework useful for establishing risk management and governance processes.
Specifically, he cited the integrated nature of the framework within Vertex AI as a key differentiator, providing ready-to-use security controls and sparing companies from having to assemble their own from scratch.
The development teams behind Google's Gemini family of models also adhere to the Secure AI Framework, Venables said, underscoring the company's commitment to internal certification before external release.
Meanwhile, Google Cloud is promoting industry-wide collaboration by open-sourcing the Secure AI Framework, co-founding the Coalition for Secure AI, and encouraging other technology companies to contribute to and build on these best practices.
Venables also addressed the regulatory challenges of AI security, noting that liability depends on the specific context and industry regulations. While highly regulated industries may involve multiple teams, including risk management, compliance, and legal, the security team often takes the lead. He noted the trend of chief information security officers evolving into chief digital risk officers, reflecting the broader scope of risk management required in the age of artificial intelligence.
Regarding the challenge of understanding AI risks, Venables acknowledged that many security teams are still learning, adding that Google Cloud supports customers by providing tools, training, workshops, and access to expert teams. It is also developing “data cards” and “model cards,” similar to software bills of materials, to provide transparency into the components and data used in AI models.
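To show the kind of information such a card might carry, here is a minimal, hypothetical model-card record, analogous to a software bill of materials; the field names and values are invented for this sketch and do not reflect Google Cloud's actual schema.

```python
# Illustrative model-card record, analogous to a software bill of materials.
# Field names and values are invented; this is not Google Cloud's schema.
import json

model_card = {
    "model_name": "fraud-classifier",
    "version": "3.0.0",
    "base_model": "example-foundation-model",
    "training_data": [
        {"name": "transactions-2023", "sha256": "<recorded digest>", "license": "internal"},
    ],
    "intended_use": "Flag potentially fraudulent card transactions for human review",
    "known_limitations": ["Not evaluated on transactions outside the test regions"],
    "evaluations": {"auc_roc": 0.94},
}

print(json.dumps(model_card, indent=2))
```

Like an SBOM for software, a record of this kind lets downstream users see which components and datasets went into a model without having to reverse-engineer it.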