For better or worse, artificial intelligence is now everywhere in our society. And whether we love it or not, there is no going back. As we learn to coexist with it, we have a duty to reduce its risks, but also an opportunity to exploit its full potential.
The challenges and opportunities are especially clear for information security professionals. On the one hand, artificial intelligence opens an era of exploring how the technology can reshape security operations, from advanced threat detection to automated response, as well as protecting data in AI-driven environments. On the other hand, cybercriminals are increasingly capable of harnessing AI capabilities to devise new and more sophisticated techniques.
The dark side of artificial intelligence
Netskope Threat Labs researchers recently found that clicks on phishing links in the workplace tripled in 2024, and that malicious content downloads occurred in 88% of organizations every month. They also found that a common denominator in the success of these cyber threats is the rapid evolution of social engineering campaigns designed by attackers to deceive their victims, with AI-generated content contributing significantly.
Tools like WormGPT, followed by FraudGPT, are developing into a growing roster of rogue chatbots. These tools emerged as dark, guardrail-free variants of legitimate GenAI tools; they help bad actors craft more convincing phishing emails, write more effective malware, and have become a source of inspiration for further harmful activity.
Generative AI has also brought voice and video cloning that can readily be put to harmful purposes. Creating realistic, convincing deepfakes has become easier and faster, and we are already witnessing their effective use across a range of criminal activities, from targeted workplace fraud to mass disinformation.
Beyond amplifying threats, artificial intelligence is also causing headaches for data protection professionals. We cannot discuss the risks of AI without addressing the risks of GenAI use and of sensitive data leakage. In 2024, approximately 6% of the hundreds of thousands of Australian users analyzed by Netskope Threat Labs violated organizational data security policies every month, and a large share of those violations were attempts to enter sensitive or regulated data into GenAI tools.
Clearly, the arrival of GenAI has been a catalyst for new threat vectors. But while AI helps enhance the capabilities of cybercriminals, it contributes equally, if not more, to cybersecurity techniques and practices.
Our best ally for security now and in the future
If you only read the alarming headlines, it may seem that the bad guys are outpacing the defenders. But that is an inaccurate picture. Security teams have an advantage over cybercriminals, because some of the brightest minds in artificial intelligence and machine learning have been building and refining powerful security tools for more than a decade.
Artificial intelligence has changed the threat detection game thanks to its ability to analyze and detect behaviors and patterns in real time, with high levels of sophistication and at scale. Identifying a user who clicks on a phishing link or accesses a fake login page, who moves sensitive data in unusual ways (a sign of possible compromise), who pastes sensitive data into a GenAI prompt, or who accesses or downloads malicious content from cloud applications: these are all scenarios that AI-powered threat detection engines should cover.
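To make those scenarios concrete, here is a minimal, self-contained sketch of rule-style detection logic in Python. The event schema, phishing-domain list, patterns, and threshold are all hypothetical illustrations, not any vendor's actual engine.

```python
# A minimal sketch of behavior-based detection rules. The event schema,
# phishing-domain feed, and threshold are hypothetical; real AI-powered
# engines layer machine-learned models over far richer telemetry.
import re
from dataclasses import dataclass

SENSITIVE_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN shape, as an example
KNOWN_PHISHING_HOSTS = {"login-micros0ft.example"}        # illustrative threat feed
EXFIL_THRESHOLD_BYTES = 50_000_000                        # "unusually large" upload

@dataclass
class Event:
    user: str
    action: str        # "click_link" | "upload" | "genai_prompt"
    destination: str   # host or application name
    payload: str       # text content, if any
    bytes_out: int     # outbound volume in bytes

def detect(event: Event) -> list[str]:
    """Return the alert labels raised by a single user event."""
    alerts = []
    if event.action == "click_link" and event.destination in KNOWN_PHISHING_HOSTS:
        alerts.append("phishing_link_click")
    if event.action == "upload" and event.bytes_out > EXFIL_THRESHOLD_BYTES:
        alerts.append("possible_exfiltration")  # a sign of possible compromise
    if event.action == "genai_prompt" and SENSITIVE_PATTERN.search(event.payload):
        alerts.append("sensitive_data_in_genai_prompt")
    return alerts

# Example: an SSN-shaped string pasted into a GenAI prompt raises an alert.
print(detect(Event("alice", "genai_prompt", "chat.example", "ssn 123-45-6789", 120)))
```

Where machine learning earns its keep is in replacing hard-coded lists and thresholds like these with models that learn what "unusual" means for each user.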
Beyond detection, well-trained algorithms also bring autonomous threat prevention and response to the table. Data loss prevention (DLP) tools automatically block user actions that violate data protection policies, for example an attempt to send confidential information to a personal account. Real-time user coaching tools bring a sophisticated approach, and a useful complement to cybersecurity best-practice training, by detecting unwanted behaviors as they occur and presenting users with a pop-up when they are about to take a risky action. Users are given the policy context and can be asked whether they want to "accept the risk" (most do not), be directed to an alternative, or be asked to justify their action and receive a policy exemption: whichever the security team chooses.
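As an illustration of the block-versus-coach decision described above, the following Python sketch evaluates an outbound message against a toy policy. The patterns, verdicts, and function names are assumptions invented for this example, not any DLP product's actual API.

```python
# An illustrative block-versus-coach decision for outbound data. All
# names and patterns here are invented for the example.
import re
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    COACH = "coach"   # show a pop-up: accept risk, justify, or pick an alternative
    BLOCK = "block"   # hard policy violation: stop the action outright

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # example sensitive pattern
PERSONAL_DOMAINS = ("@gmail.com", "@outlook.com")    # illustrative list

def evaluate_send(text: str, recipient: str) -> Verdict:
    """Decide how to handle an outbound message before it leaves."""
    sensitive = bool(SSN_PATTERN.search(text))
    personal = recipient.endswith(PERSONAL_DOMAINS)
    if sensitive and personal:
        return Verdict.BLOCK   # confidential data to a personal account
    if sensitive:
        return Verdict.COACH   # risky but permitted: coach the user in real time
    return Verdict.ALLOW

print(evaluate_send("My SSN is 123-45-6789", "bob@gmail.com"))  # Verdict.BLOCK
```

The design point is that coaching sits between allow and block: it interrupts the risky action without removing the user's agency, which is what makes it a training tool as well as a control.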
When defining tools and policies, security leaders need to ensure they cover every scenario their employees may face. Information is not always text-based: in fact, around 20% of sensitive data appears in images, such as photos or screenshots. AI algorithms trained specifically for this purpose can now detect potential data leaks that appear in images or videos.
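To show in principle how image-borne leaks can be caught, here is a minimal sketch that runs OCR over a screenshot and scans the extracted text. It assumes the open-source pytesseract and Pillow libraries and a hypothetical input file; production systems use vision models trained specifically for sensitive-content detection, not plain OCR.

```python
# A minimal sketch of image-based DLP via OCR. Assumes pytesseract and
# Pillow (pip install pytesseract pillow, plus the Tesseract binary) and
# a hypothetical input file; purpose-trained vision models do this far
# more robustly in practice.
import re
from PIL import Image
import pytesseract

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # rough payment-card shape

def screenshot_leaks(path: str) -> bool:
    """Return True if OCR'd text from the image matches a sensitive pattern."""
    text = pytesseract.image_to_string(Image.open(path))  # extract visible text
    return bool(CARD_PATTERN.search(text))

if screenshot_leaks("screenshot.png"):  # hypothetical file name
    print("Potential sensitive data detected in image")
```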
If these capabilities sound impressive, consider that we have only scratched the surface. The volume of research and development in this field is enormous, and new features are delivered to cloud security services constantly, allowing organizations to keep pace far faster than legacy appliances ever allowed.
The bottom line? Artificial intelligence brings remarkable capabilities to security, and it is our best ally, now and in the future, in defending against modern and constantly evolving threats, including those that involve AI itself. Teams like Netskope AI Labs have been leveraging AI and ML for years, embedding them at the heart of modern security platforms.
Bob will be at the Gartner Security & Risk Management Summit in Sydney on March 3 to discuss this topic further.