Just under 45% of organizations conduct regular audits and assessments to ensure their cloud environments are secure, a figure one Bitdefender executive calls “concerning” as more applications and workloads move to multi-cloud platforms.
When asked how they monitor risks across their cloud infrastructure, 47.7% of companies cited automated security tools while 46.5% relied on native security offerings from service providers. Another 44.7% said they conducted regular audits and assessments, according to a report from security firm Bitdefender.
Also: Artificial Intelligence is changing cybersecurity and companies must take notice of the threat
The study polled more than 1,200 IT and security professionals, including chief information security officers, in six markets: Singapore, the United Kingdom, France, Germany, Italy, and the United States. A further 42.1% of respondents said they worked with external experts to monitor risks.
Paul Hadji, Bitdefender's vice president for Asia Pacific and cybersecurity services, said in response to ZDNET's questions that it is “certainly concerning” that only 45% of companies conduct regular audits of their cloud environments.
Hadji noted that organizations continue to over-rely on cloud providers to protect hosted services and data, even as they move more applications and workloads to multi-cloud environments.
“Most of the time, [cloud providers] are not as responsible as you might think, and the data stored in the cloud is often large and sensitive,” Hadji said.
“Responsibility for cloud security, including how data is protected at rest or in motion, the identities of the people, servers, and endpoints granted access to resources, and compliance, rests mostly with the customer,” he said. “It is important to first establish a baseline identifying current risks and vulnerabilities in your cloud environments, based on things like geography, industry, and supply chain partners.”
Among the top security concerns respondents had in managing their company's cloud environments, 38.7% cited identity and access management while 38% cited the need to maintain cloud compliance. The study found that another 35.9% described shadow IT as a concern, while 32% were concerned about human error.
However, when it comes to threats related to generative AI (GenAI), participants seemed confident in their teammates' ability to identify potential attacks. A majority of 74.1% believed colleagues in their department would be able to detect a deepfake video or audio attack, with US respondents showing the highest level of confidence at 85.5%.
Also: Program faster with generative AI, but beware of the risks when doing so
In comparison, just 48.5% of respondents in Singapore were confident their teammates could spot deepfakes, the lowest percentage among the six markets. In fact, 35% in Singapore said colleagues in their department would not be able to identify a deepfake, the highest proportion in the global group to say so.
So was the confidence of the 74.1% global majority in their teammates' ability to spot deepfakes misplaced?
Hadji noted that this confidence was expressed even though 96.6% of respondents viewed GenAI as a minor to very significant threat. The likely explanation, he said, is that IT and security professionals do not necessarily trust the ability of users outside their teams, who are not in IT or security, to spot deepfakes.
“That is why we believe that technology and processes (implemented) together are the best way to mitigate these risks,” he added.
In response to a question about how effective or accurate current tools are at detecting AI-generated content such as deepfakes, he said this depends on several factors. If a deepfake is delivered via a phishing email or embedded in a text message containing a malicious link, he explained, it should be quickly identified by endpoint protection tools, such as extended detection and response (XDR) tools.
However, he noted that threat actors rely on the natural human tendency to believe what we see and what the people we trust endorse, such as celebrities and high-profile figures, whose images are often manipulated to communicate messages.
Also: 3 ways to accelerate and improve your generative AI implementation
As deepfake technologies continue to advance, he said, it will become “almost impossible” to detect such content by sight or sound alone, and he stressed the need to develop technologies and processes that can detect deepfakes reliably.
And while respondents in Singapore were the most skeptical of their teammates' ability to detect deepfakes, he noted that 48.5% is still a high number.
Hadji again stressed the importance of having both technology and processes in place: “Deepfakes will continue to improve, and detecting them effectively will require sustained effort that brings together people, technology, and processes. In cybersecurity, there is no ‘silver bullet’; it is always a multi-layered strategy, starting with strong prevention to close the door before a threat enters.”
Training is also becoming increasingly important as more employees work in hybrid environments and more risks originate from home setups. “Companies need to take clear steps to verify authenticity and protect against deepfakes and highly targeted phishing campaigns,” he said. “Processes are key, helping organizations ensure that double-verification procedures are in place, especially in cases involving the transfer of large sums of money.”
According to the Bitdefender study, 36.1% of respondents see GenAI as a very significant threat in terms of its ability to manipulate or create deceptive content, such as deepfakes. Another 45.1% described it as a moderate threat, while 15.4% said it was a minor one.
Also: Nearly 50% of people want AI clones to do it for them
The vast majority, 94.3%, were confident in their organizations' ability to respond to current security threats, such as ransomware, phishing, and zero-day attacks.
However, the study revealed that 57% admitted they had experienced a data breach or leak in the past year, an increase of 6% on the previous year. This figure was lowest in Singapore at 33% and highest in the United Kingdom at 73.5%.
Phishing and social engineering were the top concern at 38.5%, followed by ransomware, insider threats, and software vulnerabilities at 33.5% each.