Reflections from Aspen
The Summer 2025 meeting of the Aspen US Cybersecurity Group was a valuable gathering of minds from government, industry and academia. I had the privilege of attending and providing opening remarks for a session at the event, which served as a critical forum for candid discussion of the most pressing challenges facing our digital world. With the cybersecurity landscape being reshaped by the rapid evolution of artificial intelligence, these conversations were more vital than ever.
As I reflect on the key takeaways from the meeting, three strategic themes resonate most strongly, and they will fundamentally guide our approach to cyber resilience in the years to come.
We Are in the AI “Before Times”
A core consensus emerged in Aspen: while defensive AI capabilities currently hold an advantage, this could be a fleeting moment if we do not continue to innovate and apply secure-by-design principles to AI. We are unequivocally in the "before times" of AI-driven cyber conflict. Offensive AI is advancing at an extraordinary pace, and it has the potential to shift the balance of power.
We are in a new, demanding era of cyberwarfare. Heightened geopolitical tensions, the rapid adoption of AI and the expansion of remote work have combined to expand our attack surface dramatically. This is a monumental shift that requires us to rethink our entire approach to cybersecurity.
Our Unit 42 Threat Intelligence team has been running simulations that provide a stark glimpse into this future. By leveraging advanced AI, we have demonstrated the capability to execute an entire attack chain, from initial access to data exfiltration, in as little as 25 minutes. This level of speed and automation far surpasses the capabilities of human-led operations, highlighting a future where the decisive advantage will belong to the side with the most sophisticated AI.
Particularly critical, in my view, were insights into how attackers are not simply automating old tactics but are targeting the foundation of our AI deployments. We are seeing threat actors actively target internal large language models (LLMs) to navigate victim organizations. An attacker can use a compromised LLM to understand an organization's network architecture, identify sensitive datastores and craft sophisticated social engineering attacks. We expect these capabilities to become more automated and more sophisticated, extending even to the data our models are trained on. This represents a new, high-stakes battleground, and securing our AI infrastructure is now an essential prerequisite for organizational security.
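To make the defensive implication concrete, here is a minimal sketch of one control this points to: screening an internal LLM's responses for infrastructure-sensitive content before they reach the caller. The patterns and the `llm_respond` callable are hypothetical placeholders for illustration, not a description of any particular product.

```python
import re

# Hypothetical patterns for the kind of infrastructure detail an attacker
# could coax out of a compromised internal LLM: internal IPs, hostnames,
# embedded credentials.
SENSITIVE_PATTERNS = [
    re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),            # RFC 1918 addresses
    re.compile(r"\b(?:prod|internal)\.[\w.-]+\.corp\b", re.I),   # internal hostnames
    re.compile(r"(?:password|secret|api[_-]?key)\s*[:=]", re.I), # embedded secrets
]

def guarded_completion(llm_respond, prompt: str) -> str:
    """Wrap an internal LLM call and withhold responses that would hand
    network-architecture or credential details to the caller."""
    response = llm_respond(prompt)
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(response):
            return "[response withheld: matched sensitive-infrastructure policy]"
    return response
```

A real deployment would pair output screening like this with prompt inspection and strict access controls on the model itself; the point is that a model's inputs and outputs are now part of the attack surface.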
AI Creates a New Class of "Insider Threat"
The concept of an insider threat has historically centered on malicious or compromised employees. However, the discussions in Aspen highlighted a new and equally dangerous category: the autonomous AI agent. When these agents, which can operate independently within a network, are compromised, they become a novel and highly potent form of insider threat.
This new threat vector is emerging alongside a significant increase in a more traditional but still evolving tactic: nation-state actors using fraudulent remote worker identities to breach organizations. We’ve seen this tactic, often attributed to groups in North Korea, surge dramatically. The data we shared with the group from our 2025 Unit 42 Global Incident Response Report was sobering: cases of nation-state actors using these fraudulent personas tripled year over year in 2024. These groups are leveraging generative AI and deepfake technologies to create synthetic identities, making them incredibly difficult to detect.
This blending of threats, from autonomous agents to sophisticated human impersonators, fundamentally challenges our existing security models. Once compromised, an AI agent possesses the ultimate insider access. It understands the network and can exfiltrate data with a speed and efficiency that a human attacker could never achieve. The boundaries between external and internal threats are blurring, compelling us to re-evaluate our zero trust principles and identity management strategies. We must collectively work to secure the "human-machine interface," the critical touchpoints where users and systems interact.
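One pattern that follows from this is to treat every autonomous agent as an untrusted identity: no standing access, only narrowly scoped, short-lived credentials. Below is a minimal sketch of that idea; the token issuer, agent name and scopes are illustrative assumptions, not a reference design.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentToken:
    agent_id: str
    scopes: frozenset      # explicit allow-list of actions
    expires_at: float      # short TTL forces frequent re-attestation

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def issue_token(agent_id: str, scopes: set, ttl_seconds: int = 300) -> AgentToken:
    """Issue a least-privilege, short-lived token: a compromised agent
    holds only these scopes, and only until the TTL lapses."""
    return AgentToken(agent_id, frozenset(scopes), time.time() + ttl_seconds)

# Usage: an agent permitted to read tickets cannot touch the HR datastore.
token = issue_token("triage-agent-7", {"tickets:read"})
assert token.allows("tickets:read")
assert not token.allows("hr-db:read")
```

The design choice matters more than the code: if a compromised agent can only act within an expiring allow-list, its "insider access" shrinks from the whole network to a handful of auditable actions.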
The AI Tech Stack Is the New Security Framework
As we grappled with these emerging threats, the conversation shifted toward the strategic imperative of securing our AI initiatives. A report on the AI tech stack, created by the Paladin Global Institute, resonated strongly with attendees, especially those of us at Palo Alto Networks. Its central premise: the AI tech stack must be our new security framework.
It’s no longer enough to chase threats with a disjointed solution for each one. Instead, we must embrace relentless innovation with AI at its core to stay a step ahead. A five-layer model was presented that provides a clear, actionable roadmap for organizations, a structured approach that moves beyond traditional security paradigms to address the unique complexities of AI:
- Governance Layer: This is the top-down strategic layer, focused on policies, risk management and regulatory compliance. It ensures that AI is used ethically and that clear guardrails are in place.
- Application Layer: This focuses on securing the user-facing AI applications themselves, addressing threats like prompt injection and data poisoning.
- Infrastructure Layer: This involves securing the underlying compute and storage infrastructure where AI models run, whether in the cloud or on-premises.
- Model Layer: This layer is dedicated to the integrity of the AI models, protecting against model theft and evasion attacks, and ensuring reliable outputs.
- Data Layer: This is about securing the data that feeds the AI system, encompassing data privacy, access controls and data quality.
This framework was met with significant interest because it provides practical guidelines for CISOs and CIOs to discuss AI risk in a structured, comprehensive manner.
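As a thought experiment in what that structured conversation could look like, here is a minimal sketch that encodes the five layers as a reviewable checklist. The control questions are my own illustrative paraphrases of the layer descriptions above, not an official rubric from the report.

```python
# Illustrative encoding of the five-layer AI tech stack as a review checklist.
AI_STACK_REVIEW = {
    "Governance":     ["Are AI use policies and regulatory mappings documented?",
                       "Who owns AI risk acceptance?"],
    "Application":    ["Are prompts screened for injection?",
                       "Are inputs validated against data poisoning?"],
    "Infrastructure": ["Is the compute and storage running models hardened and isolated?"],
    "Model":          ["Are model weights protected from theft?",
                       "Are outputs monitored for evasion and drift?"],
    "Data":           ["Are privacy, access control and quality checks enforced on data?"],
}

def review_gaps(answers: dict) -> list:
    """Return the layers with an unanswered or failed control, giving
    CISOs and CIOs a layer-by-layer view of AI risk."""
    gaps = []
    for layer, questions in AI_STACK_REVIEW.items():
        results = answers.get(layer, [])
        if len(results) < len(questions) or not all(results):
            gaps.append(layer)
    return gaps

# Example: every control passes except one at the model layer.
print(review_gaps({
    "Governance": [True, True],
    "Application": [True, True],
    "Infrastructure": [True],
    "Model": [True, False],
    "Data": [True],
}))  # -> ['Model']
```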
The Quantum Horizon
A critical concern that I also highlighted in my remarks is the future threat posed by quantum computing. This isn't a theoretical problem for tomorrow; it's an imperative to prepare now to protect long-term data confidentiality. Quantum computing poses a real threat to current encryption standards, creating the risk that sensitive encrypted data stolen today could be decrypted once quantum computers mature. This is why Palo Alto Networks is focused on post-quantum cryptography research and solutions, and on helping our customers prepare for this future.
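To illustrate what preparing now can look like in practice, here is a minimal sketch of a hybrid key-derivation pattern: combine a classical X25519 exchange with a post-quantum KEM secret, so the session key survives even if one scheme is broken. The classical half uses the pyca/cryptography library; the post-quantum secret is stubbed with random bytes, since ML-KEM APIs vary across libraries and versions.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: an X25519 Diffie-Hellman exchange.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: stubbed here; in practice this would come from an
# ML-KEM (Kyber) encapsulation via a library such as liboqs.
pq_secret = os.urandom(32)

# Hybrid derivation: the session key falls only if BOTH schemes break,
# which blunts harvest-now-decrypt-later attacks on the classical half.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pqc-demo",
).derive(classical_secret + pq_secret)
```

This mirrors the hybrid key-exchange designs now being rolled out in TLS, where classical and post-quantum secrets are combined so that data recorded today gains an attacker nothing tomorrow.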
A Collective Call to Action
My time at the Aspen meeting reinforced a fundamental truth I've held throughout my career: Cybersecurity is a collective challenge. Threats are evolving at an unprecedented pace, but so too is our ability to innovate and collaborate. The discussions in Aspen were a vital reminder that by sharing knowledge and working together across the public and private sectors, we can navigate the AI era and build a more resilient digital future.