The Cloud Security Alliance (CSA), an organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, today released Artificial Intelligence (AI) Risk Management: Thinking Beyond Regulatory Boundaries. The document, drafted by CSA's AI Governance and Compliance Working Group, provides a comprehensive framework for auditing AI systems, addressing important aspects of AI technology and providing auditors with much-needed insights and tools to ensure the reliability and responsible innovation of intelligent systems.
“The proliferation of intelligent systems in today’s world requires auditors to be not only willing, but able, to evaluate these systems beyond simply checking boxes,” said Ryan Gifford, research analyst at Cloud Security Alliance and a member of the AI Governance and Compliance Working Group leadership team. “While the need for rigorous, purposeful, results-based auditing is critical, trust in AI can only be achieved through a far-reaching approach to auditing that goes beyond what is required. We are hopeful that auditors can begin to address compliance proactively and comprehensively, using the framework described in this document.”
Written as a follow-up to AI Resilience: A Revolutionary Benchmarking Model for AI Safety, these guidelines are designed to be universally applicable across industries, focusing on privacy, security, and trustworthiness through a risk-based approach that emphasizes critical and investigative thinking, curiosity, and the auditor’s ability to evaluate systems for unintended behavior.
Building on current best practices for AI auditing, the document takes an innovative approach as it covers the entire AI lifecycle – from development and deployment to monitoring and decommissioning – and includes sample questions to be covered during an audit or evaluation. The document is designed to provide foundational knowledge about AI resilience, types of AI systems, and core concepts such as responsibility and accountability, with detailed sections on AI governance, applicable laws and standards, and third-party vendor and infrastructure management. By taking a comprehensive approach to AI auditing, the guidelines aim to mitigate risks, enhance transparency, and ensure that AI systems are not only compliant but truly trustworthy.
Download AI Risk Management: Thinking Beyond Regulatory Boundaries.
The AI Governance and Compliance Working Group aspires to be an industry cornerstone for creating, advocating, and disseminating AI governance and compliance standards. The group aims to shape policy, influence legislation, and set the gold standard for the industry. Individuals interested in participating in future research and initiatives are invited to join the working group.