By now, you’ve heard about the latest frontier AI models that are remarkably good at finding vulnerabilities in code and creating potential exploits. So good, in fact, that these models have been significantly limited from general use in an attempt to give defenders time to find and fix vulnerabilities before attackers find and exploit them.
For context, on April 7, 2026, we began testing Anthropic’s Claude Mythos model as a launch partner for Project Glasswing. Our conclusion was clear: The latest models are extraordinarily capable at finding vulnerabilities and turning them into critical exploit paths in near-real time. In Defender's Guide to the Frontier AI Impact on Cybersecurity, I shared our early findings and recommendations.
Since then, we’ve continued testing the latest frontier AI models, including Anthropic’s Mythos and Claude Opus 4.7 and OpenAI’s GPT-5.5-Cyber as part of the Trusted Access for Cyber program. The big question just a few weeks ago was: “Are we overstating the model capabilities?” With more testing, I can confidently say we weren’t. In fact, these models are likely even better at finding vulnerabilities than we initially realized. Today, we’re providing an update on our ongoing research, our learnings uncovered in the process, and the approach we’re taking to protect our customers.
Find and Fix Before Attackers Find and Exploit
Today, we released our May “Patch Wednesday” security advisories, our monthly cadence of transparent vulnerability disclosure and remediation. This is the first time that the majority of findings were the result of frontier AI models scanning our code.
- These are the results of the full, initial scan of over 130 products across all three platforms.
- As of today, we’ve patched all important vulnerabilities in our SaaS-delivered products, and all customer-operated products now have patches available.
- Today’s advisory covers 26 CVEs (representing 75 issues), versus our usual volume of typically fewer than five CVEs in a month; none are being exploited in the wild. Note that this excludes CyberArk vulnerabilities, which are disclosed through their normal process.
It's important to understand this isn’t a one-and-done situation. We’re now rescanning, applying all our learnings about how to provide the right context and threat intelligence to the models. We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries.
While incredibly powerful, AI models aren’t simply magic. To achieve high-fidelity results, you need to build AI scanning harnesses and leverage context, guardrails and threat intelligence. We’ve also discovered variance across models, due to differences in their training: A multimodel approach is required to identify the superset of vulnerabilities. And finally, while the immediate priority is finding and fixing the vulnerabilities organizations currently have, the longer-term shift is incorporating these models directly into the software development lifecycle. This is the light at the end of the tunnel: A future where software is secure by design.
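The multimodel point can be made concrete with a small sketch. Everything here is illustrative (the `Finding` fields and the merge helper are hypothetical, not any vendor's API): the idea is simply that each model's harness emits findings, and the superset is the deduplicated union across models.

```python
# Illustrative sketch only: Finding fields and the union logic stand in for a
# real scanning harness; no specific model API is implied.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so findings deduplicate in sets
class Finding:
    file: str
    line: int
    cwe: str  # e.g. "CWE-89" for SQL injection


def superset_of_findings(results_by_model: dict[str, set[Finding]]) -> set[Finding]:
    """Union per-model results, deduplicating identical findings, so that a
    vulnerability surfaced by any one model is retained."""
    combined: set[Finding] = set()
    for findings in results_by_model.values():
        combined |= findings
    return combined


# Example: two models overlap on one finding and each contributes a unique one.
a = Finding("auth.py", 42, "CWE-89")
b = Finding("api.py", 7, "CWE-79")
c = Finding("io.py", 130, "CWE-22")
merged = superset_of_findings({"model-a": {a, b}, "model-b": {a, c}})
# merged holds 3 distinct findings
```

In practice the interesting work is inside each harness (repo context, guardrails, threat intelligence); the union step is trivial, which is the point: running multiple models costs little beyond compute, and the variance between them is what buys coverage.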
Four Steps Every Organization Needs to Take Immediately
Even with access currently restricted, we believe these capabilities will flow more broadly to other models. We now estimate a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits become the new norm. This impending vulnerability deluge demands urgency. Organizations that haven’t put appropriate safeguards in place will face an entirely new class of risk. Here’s what we recommend:
- Find and Fix Vulnerabilities In Your Applications, Products and Code
Find and fix before attackers find and exploit.
- Leverage AI models to identify vulnerabilities across your entire codebase.
- Apply the same AI scanning to your open-source supply chain, and remediate or mitigate findings.
- Run accelerated patching tightly coordinated with product and development teams.
- Assess, Reduce and Remediate Your Exposure
Reduce what is reachable by attackers, and secure what must be accessible, such as customer-facing applications.
- Attack surface management products, like Cortex Xpanse®, have never been more critical for finding and reducing exposure.
- The latest frontier AI models are very adept (with the right AI scanning harness) at evaluating exposures, understanding security misconfigurations and prioritizing attack-path reachability.
- Audit your supply chain, including AI infrastructure, runtime environments and model dependencies.
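A supply-chain audit can start as simply as checking pinned dependencies against known-vulnerable versions. The sketch below is a minimal illustration with made-up advisory data; a real audit would pull advisories from a live source (for example, the OSV database) rather than a hardcoded map.

```python
# Hedged sketch: ADVISORIES is fabricated for illustration. In practice you
# would feed in real advisory data rather than a static dict.
ADVISORIES: dict[str, set[str]] = {
    # package -> versions known to be vulnerable (hypothetical)
    "examplelib": {"1.0.0", "1.0.1"},
}


def parse_pins(requirements: str) -> dict[str, str]:
    """Parse 'name==version' lines from a requirements-style manifest,
    skipping blanks, comments and unpinned entries."""
    pins: dict[str, str] = {}
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.lower()] = version
    return pins


def audit(requirements: str) -> list[str]:
    """Return the pinned packages whose versions appear in the advisory map."""
    return [
        f"{name}=={version}"
        for name, version in parse_pins(requirements).items()
        if version in ADVISORIES.get(name, set())
    ]
```

The same pattern extends to AI infrastructure: treat model weights, runtime images and agent plugins as dependencies with versions, and audit them against advisories the same way.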
- Ensure Attack Protections
Vulnerability exploits are typically just one step of a multi-step attack lifecycle. Ensuring best-in-class protections is now even more important for preventing breaches.
- Map current sensor coverage to identify critical blind spots in detection, prevention and telemetry.
- Deploy best-in-class XDR everywhere, with an emphasis on real-time, ML-based detection and prevention of attacks, covering all hosts on-premises and in the cloud.
- Deploy Agentic Endpoint Security to secure wide-scale adoption of vibe coding and AI across the enterprise (e.g., Prisma AIRS® and our recent acquisition of Koi are now a necessity for securing the agentic endpoint).
- Secure enterprise browsers with AI-based security are a must-have for securing where users now do their work.
- Zero trust and Identity Security are foundational to securing every user and connection, extending to internal segmentation and outbound application connections.
- Deploy Real-Time Security Operations
Autonomous AI-driven attacks will drive attack lifecycles down to minutes, requiring every SOC to achieve single-digit mean time to detect (MTTD) and mean time to respond (MTTR).
- Attack detections must be AI/ML-driven to detect even frequently changing and novel attacks at scale.
- These AI detections must operate against a wide range of first-party and third-party data sources. A best-in-class AI SOC must operate on all relevant data sources.
- Automation, both natively integrated and throughout the SOC lifecycle, is necessary to achieve single-digit MTTR. This automation will increasingly be agentic.
- This must be delivered as a platform to remove seams and gaps created by point solutions.
- Assess and act as quickly as possible.
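For teams that do not already track these metrics, MTTD and MTTR reduce to simple averages over incident timestamps. The sketch below uses fabricated timestamps purely to show the arithmetic; real pipelines would pull these times from incident records.

```python
# Minimal sketch of SOC speed metrics: given attack start, detection and
# containment timestamps per incident, compute MTTD and MTTR in minutes.
# The incident data below is illustrative only.
from datetime import datetime


def mean_minutes(pairs: list[tuple[datetime, datetime]]) -> float:
    """Average (later - earlier) across the given timestamp pairs, in minutes."""
    deltas = [(later - earlier).total_seconds() / 60 for earlier, later in pairs]
    return sum(deltas) / len(deltas)


incidents = [
    # (attack start, detected, contained)
    (datetime(2026, 5, 1, 9, 0), datetime(2026, 5, 1, 9, 4), datetime(2026, 5, 1, 9, 9)),
    (datetime(2026, 5, 2, 14, 0), datetime(2026, 5, 2, 14, 6), datetime(2026, 5, 2, 14, 13)),
]

mttd = mean_minutes([(start, detected) for start, detected, _ in incidents])  # 5.0
mttr = mean_minutes([(detected, contained) for _, detected, contained in incidents])  # 6.0
```

Both example values land within the single-digit-minute target described above; the hard part is not the arithmetic but driving the underlying detection and response times down, which is where the AI-driven detections and agentic automation come in.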
Fighting AI with AI — AI Frontier Security Innovations Coming Soon
So far, frontier AI models only find new attacks, not new attack techniques. This means that with the right innovations, we can expand our use of AI to solve the security challenges that organizations are facing, and deliver what our customers need to stay ahead of the ever-evolving threat landscape, including:
- Reimagining virtual patching with proactive, high-fidelity content updates across network, endpoint and cloud security – We expect that across open source and technology suppliers there will be a deluge of patches, and virtual patching will provide a mitigation layer necessary to give your teams time to update. We expect to roll out the first phase of capabilities very soon.
- Enhanced attack preventions, including cyber-LLM-trained ML models, small language models (SLMs) and behavior protections – Early testing with Cortex XDR® and our network security services, such as WildFire® malware prevention, indicates high protection coverage against the types of attacks created using these new frontier AI models.
- Using these models to scan our code, applications and even security configurations – Our intention is to productize these capabilities and incorporate them into our platforms.
Unit 42 — We’re Here to Help
We recognize that not everyone has the capacity or expertise to act on all of these recommendations and effectively counter frontier AI-driven risks in the short timeframe mandated by AI innovation. Our Unit 42 Frontier AI Defense service is designed to discover and remediate your current exposure before attackers do, strengthen the controls that reduce exposure and contain impact, and modernize security operations so teams can detect and respond at machine speed.
This is a pivotal moment for our industry. While the scale of the challenge presented is real, I’m confident in our ability to solve it. We’re here to help our customers navigate this transition and ensure that as the landscape continues to evolve, the advantage remains with the defender.
Forward-Looking Statements
This blog contains forward-looking statements that involve risks, uncertainties and assumptions, including, without limitation, statements regarding the benefits, impact, or performance or potential benefits, impact or performance of our products and technologies or future products and technologies. These forward-looking statements are not guarantees of future performance, and there are a significant number of factors that could cause actual results to differ materially from statements made in this blog. We identify certain important risks and uncertainties that could affect our results and performance in our most recent Annual Report on Form 10-K, our most recent Quarterly Report on Form 10-Q, and our other filings with the U.S. Securities and Exchange Commission from time-to-time, each of which are available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. All forward-looking statements in this blog are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.