Cloud Security Alliance report explores AI potential in 'offensive security'
A new research paper examines how advanced AI can support adversarial testing by red teams and offers recommendations for organizations looking to adopt it.
The paper “Using AI for Offensive Security,” published by the Cloud Security Alliance (CSA) on August 6, addresses the integration of AI into three offensive cybersecurity approaches:
Vulnerability assessment: Automated identification of vulnerabilities using scanners.

Penetration testing: Simulating cyberattacks in order to identify and exploit vulnerabilities.

Red teaming: Simulating a complex, multi-stage attack by a specific adversary, often to test an organization's ability to detect and respond.
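To make the first approach concrete, a minimal sketch of automated vulnerability assessment might match fingerprinted service versions against a local advisory table. The service names, versions, and advisory identifiers below are made-up placeholders, not real vulnerability data:

```python
# Hypothetical advisory table: service -> version -> known advisories.
# All entries are illustrative placeholders, not real vulnerabilities.
ADVISORIES = {
    "nginx": {"1.18.0": ["example-advisory-123"]},
    "openssh": {"8.2": ["example-advisory-456"]},
}

def assess(services: dict[str, str]) -> dict[str, list[str]]:
    """Map each detected service to the advisories matching its version."""
    findings = {}
    for name, version in services.items():
        hits = ADVISORIES.get(name, {}).get(version, [])
        if hits:
            findings[name] = hits
    return findings

# Only services whose detected version appears in the table are flagged.
print(assess({"nginx": "1.18.0", "openssh": "9.0"}))
```

In practice this lookup step is what vulnerability scanners automate at scale; the paper's point is that AI can assist in prioritizing and interpreting such findings.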
The paper illustrates these related practices in a graphic, and CSA notes that actual practices can vary based on factors such as organizational maturity and risk tolerance.
The primary focus of the paper is the transformation in cybersecurity caused by advanced AI such as large language models (LLMs) that support generative AI.
“This shift redefines AI from a narrow use case to a powerful, general-purpose technology,” says the report, which details current security challenges and showcases AI capabilities across five security phases:
Reconnaissance – Reconnaissance represents the initial phase in any offensive security strategy, with the goal of gathering broad data regarding the target's systems, networks, and organizational structure.

Scanning – Scanning entails systematically probing identified systems to reveal important details such as live hosts, open ports, running services, and technologies used, for example, by fingerprinting to identify vulnerabilities.

Vulnerability Analysis – Vulnerability analysis identifies and prioritizes potential security vulnerabilities within systems, software, network configurations, and applications.

Exploitation – Exploitation involves actively exploiting identified vulnerabilities to gain unauthorized access or escalate privileges within a system.

Reporting – The reporting phase concludes the offensive security engagement by systematically compiling all findings into a detailed report.
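The scanning phase above can be sketched in a few lines: a simple TCP connect scan that checks which of a handful of common ports accept connections. The target and port list are illustrative assumptions; such a scan should only ever be run against systems you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection,
            # an error code otherwise (e.g. connection refused, timeout).
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Real scanners such as Nmap add service fingerprinting on top of this basic check, which is where the paper sees AI assisting with interpretation of the results.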
“By adopting these AI use cases, security teams and their organizations can significantly enhance their defense capabilities and secure a competitive advantage in cybersecurity,” the paper says.
The paper addresses the current challenges and limitations of offensive security, such as expanding attack surfaces and increasingly advanced threats, and delves deeply into LLMs and advanced AI in the form of autonomous agents.
“The agent begins by dividing the user request into executable, prioritized plans (planning). It then reasons over the available information to choose the appropriate tools or next steps (reasoning). The LLM cannot execute the tools itself; attached systems execute the chosen tool (execution) and collect its output. The LLM then interprets the tool's output (analysis) to determine the next steps and update the plan. This iterative process allows the agent to continue working until the request is resolved,” the paper says, illustrating the loop in an accompanying diagram.
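The plan–reason–execute–analyze loop the paper describes can be sketched as a simple harness around an LLM. Everything here is an illustrative assumption: `fake_llm` stands in for a real model call, and the tools are stubs, not part of the CSA paper or any real agent framework:

```python
def fake_llm(request: str, state: dict) -> dict:
    """Stand-in for an LLM call; returns a canned decision for the demo."""
    if not state["plan"]:
        # Planning: break the request into an ordered list of steps.
        return {"action": "plan", "plan": ["scan_ports", "report"]}
    # Reasoning: pick the next tool based on the current plan.
    return {"action": "call_tool", "tool": state["plan"][0]}

# Attached tools; the LLM cannot run these itself, the harness does.
TOOLS = {
    "scan_ports": lambda: {"open_ports": [22, 80]},
    "report": lambda: {"summary": "2 open ports found"},
}

def run_agent(request: str, max_steps: int = 10) -> dict:
    state = {"request": request, "plan": [], "observations": []}
    for _ in range(max_steps):
        decision = fake_llm(request, state)           # reasoning
        if decision["action"] == "plan":
            state["plan"] = decision["plan"]          # planning
            continue
        output = TOOLS[decision["tool"]]()            # execution by harness
        state["observations"].append((decision["tool"], output))  # analysis
        state["plan"].pop(0)                          # update the plan
        if not state["plan"]:
            break
    return state

result = run_agent("enumerate exposed services on host X")
print(result["observations"])
```

The `max_steps` cap is a common safeguard in such loops: it keeps an agent that never resolves its plan from iterating indefinitely.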