Fighting Fire with Fire: GenAI and Enterprise Security

Originally published in the Economist under the title: “Gen AI and enterprise security: fighting fire with fire.”

It’s a call that no CISO wants to get: after an uptick in phishing emails, the SOC at a European manufacturer reports unusual network traffic across the organisation. Hundreds of rank-and-file employees are now locked out of their workstations. Soon, a controller in the Manchester office receives a demand for an anonymous bitcoin transfer. It’s an Akira ransomware attack. In seconds, the lives of the entire cybersecurity team are turned upside down as they scramble to restore access and prevent another attack.

While this account is fictional, the scenario plays out thousands of times every year. Ransomware attacks are on the rise, as are phishing expeditions and other exploits. These threats are set to be supercharged by a new wave of generative AI (GenAI) tools proliferating on the Dark Web.

While GenAI is not yet able to create novel malware from scratch, it is already being used as a copilot capable of writing basic code and even impersonating existing malware such as the BumbleBee loader.

Customising ransomware for a specific victim used to take roughly 12 hours of coding. In today’s environment, Dark Web tools such as WormGPT can cut that time to as little as three hours.

The situation will become more dire as the technology improves. Michelle Abraham, research director, security and trust at IDC, notes that GenAI has already proved to be a game changer for phishing. Five years ago, phishing emails were usually written in English (of varying quality), because English-speaking targets offered the most profit. Not anymore.

“The threat actors didn’t write phishing emails in other languages,” Ms. Abraham explains. “Now, that’s changing. You just get GenAI to translate for you, and still come up with good language.” This has not only increased the quality of phishing emails, but also the quantity of attacks.

Zero-day attacks that exploit a previously unknown vulnerability are another area of concern. Security agencies of the Five Eyes intelligence alliance (the United States, Britain, Australia, Canada and New Zealand) recently documented a sharp increase in zero-day attacks, noting that a majority of the most frequently exploited vulnerabilities of 2023 were initially exploited as zero-days, compared with less than 50% the year prior.

The sheer number of zero-day attacks is staggering: data gathered by Palo Alto Networks found between 2.3 million and 2.5 million zero-day attacks every day. This is in part due to hackers leveraging AI to design and launch attacks. What used to take eight weeks now requires just a few days—or even less.

Using AI to understand the vulnerability, build the code and run the exploit, all in an automated fashion, can bring that down to less than an hour.

Cybersecurity’s AI Toolbox

CISOs are quickly learning that the cybersecurity playbook of the 2010s is no longer capable of handling the threat landscape of the 2020s.

“There’s so many different parts of your IT environment, and they produce a lot of data,” says IDC’s Ms. Abraham. “It’s too much for humans to analyze.” 

Fortunately, they have access to their own AI-enabled tools to fight back. Machine learning (ML) has long been part of the cybersecurity arsenal to help identify anomalies that could indicate attacks or probes. What is different now is the ability of GenAI to provide context and help to focus the attention of human analysts—and to make better use of their limited time.
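As an illustration of the kind of anomaly spotting ML has long handled, consider flagging statistical outliers in telemetry. The sketch below is deliberately minimal and hypothetical — the z-score threshold and traffic figures are invented, not any vendor’s actual logic:

```python
from statistics import mean, stdev

def flag_anomalies(hourly_counts, threshold=2.0):
    """Return indices of hours whose event count deviates from the
    mean by more than `threshold` standard deviations."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts)
    return [i for i, n in enumerate(hourly_counts)
            if sigma and abs(n - mu) / sigma > threshold]

# Typical workday traffic with one suspicious spike at hour 3.
traffic = [120, 115, 118, 540, 122, 119, 121, 117]
print(flag_anomalies(traffic))  # → [3]
```

A production system would use far richer features and models; the point is that the anomaly, not the analyst, surfaces first.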

“The ability to ask systems questions in natural language, rather than needing to learn the specific search language of the tool, allows analysts who aren’t as familiar with a particular tool’s language to more easily query,” Ms. Abraham says.

Palo Alto Networks PrecisionAI® leverages ML and GenAI to automate threat detection, while deep learning enables predictive threat assessments. PrecisionAI can bring in data from other vendors’ security applications, as well as Palo Alto Networks’ library of 4,000 ML models.

To identify and respond to threats, we can process 9 petabytes of data every day, gleaned from our own solutions as well as third-party sources. The system can autonomously respond to 90% of threats and notify the SOC of more complicated threats that require human intervention.

When PrecisionAI finds an incident and goes to an analyst and says, ‘this incident is bad,’ and gives it a certain score, say from zero to 100, it can explain exactly why. We know that it’s 100% accurate in that assessment. We have confidence in our data, and we believe that we can help our customers on that journey.
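Taken together, the scoring and escalation described above amount to threshold-based triage. A hypothetical sketch — the incident names, scores and cut-off are invented for illustration, not PrecisionAI’s actual behaviour:

```python
def triage(incidents, escalate_at=80):
    """Split scored incidents (0-100) into those handled
    automatically and those escalated to the SOC."""
    auto, escalated = [], []
    for name, score in incidents:
        (escalated if score >= escalate_at else auto).append(name)
    return auto, escalated

incidents = [("port-scan", 35), ("phishing-link", 62),
             ("ransomware-beacon", 97)]
auto, escalated = triage(incidents)
print(auto)       # → ['port-scan', 'phishing-link']
print(escalated)  # → ['ransomware-beacon']
```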

The Shadow AI Problem is Real

The human element can confound the best-laid cybersecurity plans. It might be Felix in logistics tapping a link in a text message from an unknown sender, or Emma in R&D installing an open-source Mistral model on a local dev machine.

“It’s like everything with security,” says Ms. Abraham. “You can train people on what they should do, but it is not always possible to make sure — absolutely sure — that they don’t.”

Shadow AI follows the established Shadow IT phenomenon. It runs the gamut from employees using personal devices to upload corporate data to ChatGPT (“summarize this Q1 sales report”) to proofs of concept involving experimental AI apps.
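One common first step against Shadow AI is scanning egress or proxy logs for traffic to known GenAI services. A hedged sketch, assuming a simple "user domain" log format and an invented watchlist — a real deployment would use a maintained feed and a proper enforcement point:

```python
# Hypothetical watchlist of GenAI endpoints (illustrative only).
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com",
                 "claude.ai", "gemini.google.com"}

def find_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs where a user reached a
    GenAI service, given 'user domain' log lines."""
    hits = []
    for line in proxy_log_lines:
        user, _, domain = line.partition(" ")
        if domain in GENAI_DOMAINS:
            hits.append((user, domain))
    return hits

log = ["felix chat.openai.com", "emma intranet.corp.local"]
print(find_shadow_ai(log))  # → [('felix', 'chat.openai.com')]
```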

Multiple surveys have found high levels of unsanctioned AI in enterprise environments. Salesforce data published in late 2023 showed 45% of respondents in Britain reported using unapproved GenAI tools at work, while 15% reported using banned AI tools.

The fast-evolving cybersecurity landscape, fuelled by the rise of GenAI and the persistent problem with Shadow AI, underscores the urgency for organisations to adapt their defences to counter these threats more effectively.  

There really needs to be a very strong governance framework as well as an enforcement point. In many cases, I think attackers will continue to surprise us. As such, we will have to work even harder to stay ahead of those challenges.

Curious about what else Haider has to say? Check out his other articles on Perspectives.
