Artificial intelligence appears poised to revolutionize cybersecurity, with changes already happening on the ground — and in the cloud.
In a recent survey by the Cloud Security Alliance (CSA) and Google Cloud, 67% of IT and security professionals said they have started testing generative AI (GenAI) capabilities for security use cases, while another 27% said they are in the planning stage. Only 6% of respondents said they have no current plans to explore AI for security.
Experts say AI will increasingly augment cybersecurity operations, providing guidance and assistance to human practitioners to help them make better, more informed decisions. “This is especially important in the cloud because the cloud is complex, dynamic and constantly changing,” said Charlie Winckless, an analyst at Gartner. “Staying on top of all of that is a problem.”
It's a problem that AI and machine learning are expected to help solve, with natural language queries and responses already a “standard staple” in cloud security tools, according to Andras Cser, an analyst at Forrester.
The ability to ask a large language model (LLM) a question and receive a direct answer—based on massive amounts of complex technical data that AI models can quickly process—is a potential game changer. Instead of sifting through the data themselves, practitioners could theoretically validate their decisions and strengthen an organization’s security posture far more quickly and easily.
“Instead of having to really dig deep and understand the details, we can ask questions in natural language to more effectively sort through the noise of these tools and understand what's really going on,” Winckless said.
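To make the idea concrete, the snippet below is a minimal sketch of this kind of natural-language workflow: it bundles a handful of fabricated security findings into a prompt and asks an LLM to prioritize them. It assumes the OpenAI Python SDK and an API key in the environment; any comparable LLM endpoint would work the same way, and the model name is an arbitrary choice.

```python
# Minimal sketch: ask an LLM a natural-language question about cloud
# security findings. Assumes the OpenAI Python SDK and an API key in
# the environment; the findings below are fabricated for illustration.
from openai import OpenAI

findings = [
    {"resource": "s3://prod-logs", "issue": "bucket policy allows public read"},
    {"resource": "vm-web-01", "issue": "SSH (port 22) open to 0.0.0.0/0"},
]

question = "Which of these findings should we fix first, and why?"

prompt = "You are a cloud security assistant. Findings:\n" + "\n".join(
    f"- {f['resource']}: {f['issue']}" for f in findings
) + f"\n\nQuestion: {question}"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice; any chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```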
Caleb Sima, head of CSA’s AI Safety Initiative, predicts that AI will eventually autonomously build and oversee cloud infrastructure and pipelines, automatically integrating sophisticated security controls to reduce the attack surface. In the near term, he added, AI-powered tools are already simplifying the role of the cloud engineer by alleviating long-standing cloud security pain points.
3 Key Use Cases for Cloud AI Security
Key cloud security use cases for GenAI, according to experts, include the following.
1. Detect and fix misconfigurations
Cloud configuration errors pose one of the most serious security risks facing businesses, according to the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency, the European Union and others.
In complex cloud environments, configuration and permission errors are common, opening the door to cyberattacks and sensitive data exposure. “Ultimately, configuration errors are the cause of a range of security breaches,” said Sima.
Identifying every single error in a cloud configuration and manually troubleshooting it is time-consuming and tedious, if not impossible. AI tools can automatically analyze infrastructure and systems to detect anomalies and misconfigurations and then fix them. “They can automate repair faster and more efficiently than humans,” Sima added.
But today, AI tools are more likely to suggest policy or configuration changes to human operators, who then approve or reject them, Winckless said. And while GenAI may be able to address vulnerabilities autonomously without human intervention, it’s rare for security software to allow it to do so in real-world cloud environments.
“Most organizations still aren’t willing to automate changes in development and production,” Winckless said. “That has to change at some point, but it’s a matter of trust. It’s going to take years.” For the foreseeable future, he added, human oversight and validation of AI will remain important and desirable.
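As an illustration of that suggest-then-approve pattern, here is a minimal sketch that flags AWS S3 buckets lacking a public access block and applies the fix only after a human confirms it. It assumes boto3 is installed and AWS credentials are configured; it sketches the workflow, not any particular vendor's implementation.

```python
# Minimal sketch of the "suggest, then approve" flow: flag S3 buckets
# without a public access block and apply the fix only after a human
# confirms. Assumes boto3 and configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
        continue  # block already configured; nothing to suggest
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise  # unrelated failure; surface it

    # Suggest the remediation; a human operator approves or rejects it.
    answer = input(f"{name}: no public access block. Apply one? [y/N] ")
    if answer.lower() == "y":
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=BLOCK_ALL
        )
        print(f"Applied public access block to {name}")
```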
2. Analyze user behavior
Cser said he expects to see GenAI improve detection capabilities in the cloud security space, with the technology able to process massive data sets and identify unusual access patterns that human operators miss.
“AI will be able to give security teams insight into user behavior by putting activities in the broader context of cloud computing environments,” Sima agreed. He added that AI algorithms will become more capable of recognizing anomalous behavior and alerting teams to potential security incidents, based on factors such as the following:
- User roles.
- Access privileges.
- Device properties.
- Network traffic patterns.
Ultimately, Sima predicted that AI will not only accurately model current user behavior, but also anticipate future behavioral trends. “Taking this into account, we will see AI used to create adaptive security policies and controls and assign risk scores to individual behaviors,” he said.
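A rough sketch of what behavior-based risk scoring can look like in practice: the snippet below trains scikit-learn's IsolationForest on baseline activity, then assigns higher risk scores to events that deviate from it. The feature encoding (role level, privilege count, device trust, data volume) is fabricated for illustration; a real system would derive features from actual access logs.

```python
# Minimal sketch of behavior-based anomaly scoring with scikit-learn's
# IsolationForest. All features and values are fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [role_level, privilege_count, device_trust, mb_transferred]
baseline = np.array([
    [1, 3, 1, 12], [1, 4, 1, 15], [2, 6, 1, 20],
    [1, 3, 1, 10], [2, 5, 1, 18], [1, 4, 1, 14],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# New activity: a low-role account suddenly moving a large volume of
# data from an untrusted device.
new_events = np.array([[1, 3, 1, 13], [1, 9, 0, 900]])

# score_samples returns higher values for normal points; negate it so
# that higher numbers mean higher risk.
risk_scores = -model.score_samples(new_events)
for event, risk in zip(new_events, risk_scores):
    print(f"event={event.tolist()} risk={risk:.2f}")
```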
3. Detect and respond to threats
Experts also expect GenAI to help security teams identify malware and other active cyber threats faster and more accurately than human practitioners can on their own, by analyzing the environment in real time and comparing it with threat intelligence data.
According to Cser, GenAI-powered investigative assistants are already supporting security teams’ threat response efforts by recommending proactive measures based on activity patterns.
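At its simplest, that comparison against threat intelligence amounts to matching observed indicators against a feed of known-bad ones. The sketch below does exactly that with fabricated telemetry; real deployments would pull indicators from STIX/TAXII feeds or vendor APIs and enrich any matches with context.

```python
# Minimal sketch of matching live telemetry against threat intelligence:
# flag any observed indicator (IP or file hash) that appears in an IOC
# feed. Both data sets here are fabricated for illustration.
known_bad = {
    "185.220.101.1",                     # example suspicious address
    "44d88612fea8a8f36de82e1278abb02f",  # MD5 of the EICAR test file
}

observed = [
    {"host": "vm-web-01", "indicator": "10.0.0.5"},
    {"host": "vm-db-02", "indicator": "44d88612fea8a8f36de82e1278abb02f"},
]

for event in observed:
    if event["indicator"] in known_bad:
        print(f"ALERT: {event['host']} matched known-bad "
              f"indicator {event['indicator']}")
```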
AI-Powered Cloud Security Threats
Advances in AI technology will also change the threat landscape, with AI-based attacks inevitably becoming more sophisticated, experts say. “Threat actors will be able to leverage AI algorithms to launch highly adaptive attacks and evasion techniques,” Sima said.
That may give defenders cause for concern, but research suggests the vast majority of organizations are moving quickly to invest in defensive AI capabilities. “We can assume that companies are already anticipating how to best use AI to stay one step ahead of threat actors,” Sima said. He added, however, that organizations will need to continually prioritize AI security investments if they want to gain and maintain the upper hand.
In other words, the endless game that defenders and attackers have long been engaged in seems likely to continue — albeit powered by AI and machine learning.
Getting Started with AI-Powered Cloud Security
Many cloud security providers are building GenAI capabilities directly into their existing tools and platforms. That means all but the largest organizations don’t need to build their own AI models for security, nor should they try, according to Winckless.
But just because a provider is rolling out GenAI capabilities doesn’t mean those capabilities are infallible, or even necessarily ready for use. For example, users may face challenges such as AI hallucinations, in which an LLM generates false but plausible-sounding information, a failure mode that could be disastrous in the cybersecurity space.
“Look at the frameworks your generative AI provider uses and whether they offer any authentication or verification of inputs and outputs,” Winckless advised. “This is still a nascent field. It’s very exciting, but it’s also hard to say how well the technology is being used.”
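In practice, that verification can start small. The sketch below, using a fabricated allow-list and suggestion, shows one way to vet an LLM-suggested configuration change before anyone acts on it: the output must parse as valid JSON and touch only fields the operator has explicitly permitted.

```python
# Minimal sketch of verifying LLM output before acting on it: require
# the suggested change to be valid JSON and to touch only allow-listed
# fields. The schema and suggestion are fabricated for illustration.
import json

ALLOWED_FIELDS = {"block_public_access", "enforce_tls", "min_tls_version"}

def validate_suggestion(raw: str) -> dict:
    """Parse an LLM-suggested config change and reject anything
    outside the allow-listed fields."""
    try:
        change = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"suggestion is not valid JSON: {err}")
    unexpected = set(change) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"suggestion touches disallowed fields: {unexpected}")
    return change

llm_output = '{"block_public_access": true, "enforce_tls": true}'
print(validate_suggestion(llm_output))  # safe to pass on for human review
```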
Alyssa Erie is a senior editor at TechTarget Security.