The largest and most influential AI companies are collaborating to chart a security-first approach to the development and use of generative AI.
The Coalition for Secure AI (CoSAI) aims to provide tools to mitigate the risks associated with AI. The goal is to create standardized guardrails, security technologies, and tools for developing secure models.
“Our initial areas of work include software supply chain security for AI systems and preparing defenders for a changing cybersecurity landscape,” CoSAI said in a statement.
Initial efforts include creating safeguards and systems of checks and balances around access to and use of AI, and building a framework to protect AI models from cyberattacks, according to Google, a founding member of the coalition. Google, OpenAI, and Anthropic develop the most widely used large language models (LLMs). Other founding members include Microsoft, IBM, Intel, Nvidia, and PayPal.
“AI developers need—and end users deserve—an AI security framework that meets the moment and responsibly seizes the opportunity before us. CoSAI is the next step in this journey, and we can expect more updates in the months ahead,” wrote Heather Adkins, Google’s VP of Security Engineering, and Phil Venables, Google Cloud’s chief information security officer.
AI safety as a priority
AI has raised a range of cybersecurity concerns since ChatGPT launched in late 2022. These concerns range from AI-assisted social engineering used to break into systems to deepfake videos created to spread misinformation. Meanwhile, security companies such as Trend Micro and CrowdStrike are now turning to AI to help businesses root out threats.
The safety, trust, and transparency of AI matter because model outputs can lead organizations to take wrong, and sometimes harmful, actions and decisions, says Gartner analyst Avivah Litan.
“AI cannot operate on its own without safeguards to control it — errors and exceptions must be highlighted and investigated,” says Litan.
AI security issues can be compounded by technologies such as AI agents, add-on tools that draw on personalized data to generate more accurate answers.
“The right tools should be in place to handle all but the most obscure exceptions automatically,” says Litan.
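The kind of safeguard Litan describes can be pictured as a thin wrapper around each model call: it resolves well-understood failures automatically and escalates anything unfamiliar for human review. The sketch below is purely illustrative; the exception names, fallback text, and `guarded_call` helper are assumptions, not part of any CoSAI deliverable.

```python
# Illustrative guardrail sketch: auto-handle common, well-understood
# failure modes of a model call and flag obscure ones for investigation.
# ModelRefusal, RateLimited, and guarded_call are hypothetical names.
import logging

logger = logging.getLogger("ai-guardrail")

class ModelRefusal(Exception):
    """The model declined to answer (a known, benign failure mode)."""

class RateLimited(Exception):
    """Transient capacity error; safe to retry."""

def guarded_call(model_fn, prompt, retries=2):
    """Call model_fn(prompt), handling routine exceptions automatically
    and surfacing unrecognized ones so a human can investigate."""
    for attempt in range(retries + 1):
        try:
            return model_fn(prompt)
        except ModelRefusal:
            logger.info("Refusal recorded; returning a safe fallback.")
            return "The model declined to answer this request."
        except RateLimited:
            logger.warning("Rate limited; retry %d of %d", attempt + 1, retries)
        except Exception as exc:  # the obscure case: escalate, never mask
            logger.error("Unrecognized failure flagged for review: %r", exc)
            raise
    return "Service temporarily unavailable; the request was not completed."
```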
US President Joe Biden has urged the private sector to prioritize the safety and ethics of artificial intelligence, citing concerns that AI could deepen inequality and compromise national security.
In July 2023, the Biden administration secured voluntary commitments from major companies now part of CoSAI to develop safety standards, share the results of safety tests, and prevent their AI from being misused for biological weapons development, fraud, and deception.
CoSAI will work with other organizations, including the Frontier Model Forum, the Partnership on AI, the Open Source Security Foundation (OpenSSF), and MLCommons, to develop common standards and best practices.
MLCommons told Dark Reading this week that it will release a set of AI safety benchmarks this fall that will grade large language models on their responses to prompts touching on hate speech, exploitation, child abuse, and sex crimes.
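MLCommons has not published the grading mechanics, but such a benchmark can be thought of as scoring model responses against a fixed list of hazard categories and rolling the results up into a grade. The sketch below is a loose illustration under that assumption; the classifier stub, the 5% threshold, and the letter grades are invented, not MLCommons code.

```python
# Hypothetical illustration of grading LLM responses by hazard category.
# The categories mirror those reported above; everything else is invented.
from dataclasses import dataclass, field

HAZARD_CATEGORIES = ("hate speech", "exploitation", "child abuse", "sex crimes")

@dataclass
class GradedResponse:
    prompt: str
    response: str
    violations: list = field(default_factory=list)  # categories triggered

def classify(response: str) -> list:
    """Stand-in for a real safety classifier.

    A production benchmark would run a trained classifier per hazard
    category; this placeholder simply reports no violations."""
    return []

def grade(results: list) -> str:
    """Roll per-response violations up into a coarse letter grade."""
    if not results:
        return "N/A"
    violation_rate = sum(bool(r.violations) for r in results) / len(results)
    if violation_rate == 0:
        return "A"
    return "B" if violation_rate < 0.05 else "F"
```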
CoSAI will be managed by OASIS Open, which, like the Linux Foundation, stewards open source development projects and standards. OASIS is best known for its XML-based standards work and for the OpenDocument Format (ODF), an open alternative to Microsoft Word's .doc format.