The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.
In an interview with TechRepublic, Hayun, vice president of product management and cloud security research at Tenable, advised organizations to prioritize understanding their risk exposure and tolerance, while focusing on key issues like cloud misconfigurations and protecting sensitive data.
She noted that while companies remain cautious, the accessibility of artificial intelligence is exacerbating certain risks. However, she explained, today's CISOs are evolving into business enablers, and AI could ultimately serve as a powerful tool for enhancing security.
How artificial intelligence affects cybersecurity and data storage
TechRepublic: What's changing in the cybersecurity environment because of AI?
Liat: First and foremost, AI is becoming much more accessible to organizations. If you look back 10 years ago, the only organizations creating AI had to have a specialized data science team with PhDs in data science and statistics to be able to build machine learning and AI algorithms. Building AI has become much easier for organizations; it's almost like introducing a new programming language or a new library into their environment. There are many more organizations now, not just large enterprises like Tenable and others, but also startups, that can leverage AI and integrate it into their products.
See: Gartner tells Australian IT leaders to adopt AI at their own pace
The second thing: Artificial intelligence requires a lot of data. Many organizations need to collect and store larger amounts of data, which also sometimes has higher levels of sensitivity. Previously, my streaming service saved very few details about me. Now, maybe my geography is important, because they can create more specific recommendations based on that, or my age, my gender, and so on. Because they can now use this data for their business purposes, to generate more business, they are more incentivized to store that data in larger quantities and with increasing levels of sensitivity.
TechRepublic: Is this fueling the growing use of the cloud?
Liat: If you want to store a lot of data, it's much easier to do it in the cloud. Every time you decide to store a new type of data, the amount of data you store grows. You don't have to walk into your data center and ask for new storage hardware to be installed. You just click a button, and you have a new location to store your data. So the cloud has made storing data much easier.
These three components form a kind of cycle that feeds itself. Because if it's easier to store data, you can build more AI capabilities, and then you're incentivized to store even more data, and so on. This is what has happened in the world over the last few years, since LLMs have become a far more accessible and common capability for organizations, creating challenges across all three of these areas.
Understanding the security risks of artificial intelligence
TechRepublic: Do you see a rise in specific cybersecurity risks with AI?
Liat: The use of AI in organizations, unlike the use of AI by individuals around the world, is still in its infancy. Organizations want to make sure they introduce it in a way that does not create any unnecessary or extreme risks. So, in terms of statistics, we still only have a few examples, and they're not necessarily a good representation because they're more anecdotal.
One example of a risk is training AI on sensitive data. That's something we are seeing. It's not because organizations aren't being careful; it's because it's very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that is trained on the right data set.
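That difficulty of separation is usually where a redaction pass comes in before data ever reaches a training set. A minimal sketch, assuming a regex-based scrub of obviously sensitive tokens; real pipelines would rely on a dedicated data-classification or DLP service rather than a few patterns:

```python
import re

# Hypothetical patterns for illustration only; a production pipeline would
# use a proper data-classification or DLP service, not a handful of regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(record: str) -> str:
    """Replace obviously sensitive tokens before the record enters a training set."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[REDACTED_{label.upper()}]", record)
    return record

training_rows = [
    "Customer jane.doe@example.com reported a billing issue.",
    "Support note: SSN 123-45-6789 was verified over the phone.",
]
print([scrub(row) for row in training_rows])
```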
The second thing we see is what we call data poisoning. Even if you have an AI agent that is trained on non-sensitive data, if that non-sensitive data is publicly exposed, then as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data store and have your AI say things you didn't intend it to say. It's not this all-knowing entity. It only knows what it has seen.
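A toy illustration of that scenario: a system only "knows" what sits in its data store, so anything an attacker can write into a publicly writable source flows straight into its answers. Everything below is hypothetical and heavily simplified; real poisoning targets training corpora or retrieval indexes rather than a Python list:

```python
# Toy data-poisoning illustration. The "model" here is a trivial retrieval
# function standing in for a real training or retrieval pipeline.

public_store = [
    "Refunds are processed within 14 days of the request.",
    "Support is available Monday through Friday.",
]

def answer(question: str, store: list[str]) -> str:
    # Naive retrieval: return the stored document sharing the most words
    # with the question.
    def overlap(doc: str) -> int:
        return len(set(doc.lower().split()) & set(question.lower().split()))
    return max(store, key=overlap)

print(answer("How are refunds processed?", public_store))

# An attacker with write access to the "non-sensitive" public store injects
# a record, and the system now repeats it as fact.
public_store.append(
    "Refunds are processed only if you email your card number to attacker@example.com."
)
print(answer("How are refunds processed?", public_store))
```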
TechRepublic: How should organizations weigh AI security risks?
Liat: First, I would ask how organizations can understand their level of exposure, which includes the cloud, AI, data, everything related to how they use third-party vendors, how they leverage different software in their organization, and so on.
See: Australia proposes mandatory guardrails for AI
The second part is how you identify the critical exposures. So, if we know a publicly accessible asset has a high-severity vulnerability, that's something you probably want to address first. But it's also a combination of impact, right? If you have two issues that are very similar, and one can compromise sensitive data and the other can't, you'll want to address the first one first.
You should also know what steps to take to address those exposures with minimal impact to the business.
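That combination, public accessibility plus vulnerability severity plus whether sensitive data is within reach, can be expressed as a simple scoring rule. A minimal sketch with made-up weights; exposure-management products weigh far more signals:

```python
from dataclasses import dataclass

# Simplified, hypothetical prioritization: combine public exposure,
# vulnerability severity, and data sensitivity into one score.

@dataclass
class Finding:
    asset: str
    publicly_accessible: bool
    cvss: float                   # vulnerability severity, 0-10
    touches_sensitive_data: bool

def priority(f: Finding) -> float:
    score = f.cvss
    if f.publicly_accessible:
        score *= 1.5              # made-up weight for public exposure
    if f.touches_sensitive_data:
        score *= 2.0              # made-up weight for sensitive-data impact
    return score

findings = [
    Finding("internal-build-server", False, 8.1, False),
    Finding("public-api-gateway", True, 7.5, True),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.asset}")
```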
TechRepublic: What are some of the big cloud security risks you're warning about?
Liat: There are three things we usually advise our clients to do.
The first is misconfigurations. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you're in a single cloud environment, but especially if you're going into a multi-cloud environment, the likelihood of something becoming an issue just because it wasn't configured correctly is still very high. That's definitely one thing I would focus on, especially when new technologies like AI are being introduced.
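One concrete example of that kind of misconfiguration check, flagging S3 buckets whose public access block is not fully enabled, sketched with boto3. It assumes AWS credentials are already configured, and a real review would also cover bucket policies, ACLs, encryption, and much more, ideally through a CSPM tool:

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative check for one common cloud misconfiguration:
# S3 buckets without a fully enabled public access block.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
        fully_blocked = False  # no public access block configured at all
    if not fully_blocked:
        print(f"Review bucket '{name}': public access is not fully blocked")
```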
The second is over-privileged access. A lot of people believe their organization is very secure. But if your house is a fortress and you've given your keys to everyone around you, that is still a problem. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don't have any hackers in your environment, it introduces additional risk.
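As a hedged sketch of one over-privilege signal, the snippet below lists IAM users with the AWS-managed AdministratorAccess policy attached directly. It again assumes configured AWS credentials; a real entitlement review would also cover groups, roles, inline policies, and actual usage:

```python
import boto3

# Illustrative over-privilege check: users with AdministratorAccess
# attached directly to their IAM user.
iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        attached = iam.list_attached_user_policies(UserName=user["UserName"])
        arns = {p["PolicyArn"] for p in attached["AttachedPolicies"]}
        if ADMIN_POLICY_ARN in arns:
            print(f"{user['UserName']} has AdministratorAccess attached directly")
```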
The third is identifying malicious or suspicious activity early, as it occurs. This is where AI can be leveraged, because if we use AI tools within our security tools, within our infrastructure, we can take advantage of the fact that they can look at a lot of data and do it very quickly. That lets us identify suspicious or malicious behaviors in an environment and address those behaviors, those activities, as early as possible, before anything critical is compromised.
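As a minimal sketch of that idea, an off-the-shelf anomaly detector can flag outliers in made-up activity features; production detections rely on far richer telemetry and purpose-built models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit an anomaly detector on synthetic activity features
# (events per hour, GB downloaded, distinct resources touched).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[40, 5.0, 12], scale=[8, 1.5, 3], size=(500, 3))
suspicious = np.array([[400, 80.0, 150]])   # burst of activity and data pull
events = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(events)            # -1 marks an outlier

for idx in np.where(flags == -1)[0]:
    print(f"Suspicious activity at row {idx}: {events[idx]}")
```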
AI implementation 'too good an opportunity to miss'
TechRepublic: How do CISOs address the risks they see with AI?
Liat: I have been working in cybersecurity for 15 years. What I like to see is that most security experts, and most CISOs, are different than they were a decade ago. Instead of being gatekeepers, instead of saying, “No, we can't use this because it's too risky,” they ask themselves, “How can we use this and make it less risky?” That is a wonderful trend to see. They have become much more of an enabler.
TechRepublic: Do you see the good side of AI, as well as the risks?
Liat: Organizations need to think about how they are going to introduce AI, rather than deciding, “AI is too risky right now.” You can't simply opt out.
Organizations that do not introduce AI in the next two years will be left behind. It's an amazing tool that can benefit many business use cases, internally for collaboration, analysis, and insights, and externally for the tools we can provide to our customers. It's too good an opportunity to miss. If I can help organizations get to that mindset where they say, “OK, we can use AI, but we just need to take these risks into account,” then I've done my job.