Agentic AI and the Looming Board-Level Security Crisis

Sep 29, 2025
6 minutes

In the past year, my team and I have spoken to over 3,000 of Europe’s top business leaders, and these conversations have led me to a stark conclusion: Three out of four current agentic AI projects are on track to experience significant security challenges.

The hype, and resulting FOMO, around AI and agentic AI has led many organisations to run before they’ve learned to walk in this emerging space. It’s no surprise that Gartner expects agentic AI cancellations to rise through 2027, or that an MIT report shows most enterprise GenAI pilots are already failing. The situation is even worse from a cybersecurity perspective, with only 6% of organisations leveraging an advanced security framework for AI, according to Stanford.

But the root issue isn’t bad code; it’s bad governance. Unless boards instil a security mindset from the outset and urgently step in to enforce governance, set clear outcomes and embed guardrails in agentic AI rollouts, failure is inevitable.

From Answers to Actions

Agentic AI changes the centre of gravity. It marks a fundamental shift from AI giving answers to AI taking actions. That shift brings speed and scale, but it also moves the control surface to identity, privilege and oversight. Agentic AI success depends less on lines of code and more on lines of accountability across the boardroom. Code quality matters, but authority determines the scale of impact.

Why Agentic Fails

Agentic AI often stumbles because of a governance gap. Too many programmes sit with a single function, where agentic AI is treated as a CIO project rather than an enterprise initiative with board ownership. Security, risk, legal, operations and the business arrive late, if at all. The result is that decisions drift, shadow builds appear and nobody owns the full picture.

Another common fault line is outcome drift. Projects are launched without clear, measurable business and security outcomes. Teams start with a tool and only afterwards backfill a reason. Budgets stretch, pilots creep into production and, before long, nobody can show a board-approved measure of success or a risk threshold.

Finally, guardrails are few and untested. Agents launch with excessive privileges, access to sensitive data, thin identity checks and little to no real-time verification to limit access. There is no “seatbelt,” no zero trust for AI. Controls that looked fine on day one inevitably decay and fail as integrations grow.

A Boardroom Blueprint That Works

To counter these risks, enterprises must establish governance with teeth. Agentic AI should be treated as an enterprise-wide initiative with security at its core. That means creating some form of Agentic Governance Council: a cross-functional body that oversees all agentic AI activity across the enterprise, meets monthly, reports to the board each quarter and holds all decision rights. Shared accountability helps ensure projects are resilient, compliant and strategically aligned. If you have a Chief AI Officer, make them accountable for the register of agents, data, privileges, owners and controls.

It’s also essential to define and limit outcomes. Programmes should begin with measurable business goals and risk awareness, not the tech. Establish two or three board-approved objectives that matter, and then set risk indicators, prohibited actions and security benchmarks to prevent wasted investment. Designs can be reverse-engineered from those boundaries. If an action cannot be tied to an authorised identity with an auditable purpose, it does not execute.
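
To make that last rule concrete, here is a minimal sketch of a deny-by-default authorisation gate. The identity registry, purpose codes and action names are hypothetical placeholders; in a real deployment they would come from the board-approved register and policy, not a hard-coded dictionary.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("agent-audit")

    # Hypothetical policy data for illustration; in practice this would be
    # drawn from the governance council's approved register.
    AUTHORISED_PURPOSES = {"agent-invoice-bot": {"PUR-001", "PUR-002"}}
    PROHIBITED_ACTIONS = {"delete_customer_record"}

    @dataclass
    class ActionRequest:
        identity: str      # the agent identity attempting the action
        action: str        # the operation being requested
        purpose_code: str  # the auditable purpose the action serves

    def authorise(req: ActionRequest) -> bool:
        """Deny by default: execute only actions tied to an authorised
        identity with an approved, auditable purpose."""
        if req.action in PROHIBITED_ACTIONS:
            return False
        allowed = AUTHORISED_PURPOSES.get(req.identity, set())
        if req.purpose_code not in allowed:
            return False
        audit.info("%s authorised for %s (%s)", req.identity, req.action, req.purpose_code)
        return True

The structural point is that the default answer is no, and every yes leaves an audit trail.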

Lastly, organisations must build guardrails on day one. Trust must be balanced with control through zero trust principles. Privilege controls and identity-first security are the new guardrails for agentic AI. Agents should be treated as identities in their own right, governed alongside human and machine identities under one policy. Enforce least privilege, short-lived credentials and a separation of duties. Use subagents for risky steps, keep a person in the loop for irreversible actions and capture logs for every decision and call. Secure-by-design ensures innovation without exposing the enterprise to avoidable risk. Think of it this way: Would you give an intern unrestricted access? Then why give it to an AI agent? Remember, autonomy for agentic systems must be earned, not assumed.
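
As an illustration of those guardrails, the sketch below combines least privilege, short-lived credentials and a human-in-the-loop check for irreversible actions. The scopes, time-to-live and action names are assumptions for the example, not a prescribed implementation.

    import time
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentCredential:
        agent_id: str
        scopes: frozenset[str]  # least privilege: only what this task needs
        expires_at: float       # short-lived: forces regular re-authorisation

        def valid_for(self, scope: str) -> bool:
            return scope in self.scopes and time.time() < self.expires_at

    def issue_credential(agent_id: str, scopes: set[str], ttl: int = 900) -> AgentCredential:
        """Issue a credential that expires in minutes, not months."""
        return AgentCredential(agent_id, frozenset(scopes), time.time() + ttl)

    # Hypothetical list of actions a person must always approve.
    IRREVERSIBLE_ACTIONS = {"wire_transfer", "production_deploy"}

    def execute(cred: AgentCredential, action: str, human_approved: bool = False) -> None:
        if not cred.valid_for(action):
            raise PermissionError(f"{cred.agent_id} has no live grant for {action}")
        if action in IRREVERSIBLE_ACTIONS and not human_approved:
            raise PermissionError(f"{action} is irreversible; a person must approve it")
        print(f"executing {action} as {cred.agent_id}")  # plus an audit log entry in practice

The design point is that authority is scoped and temporary by construction, so autonomy has to be re-earned rather than assumed.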

Checks You Can Run This Quarter

Governance

  • Form the Agentic Governance Council and publish its remit.
  • Maintain a live register of agents, data access, owners and controls (a minimal sketch follows this list).
  • Run premortems, red-team prompt exercises and scenario tests before go-live.
  • Review incidents and near-misses with the board each quarter.
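
For the register itself, something as simple as the following would do to start; the fields mirror the list above (agents, data access, owners, controls), and the names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class AgentRecord:
        agent_id: str                                         # unique identity of the agent
        owner: str                                            # named, accountable human or team
        data_access: list[str] = field(default_factory=list)  # data sets it may touch
        privileges: list[str] = field(default_factory=list)   # actions it may take
        controls: list[str] = field(default_factory=list)     # guardrails applied

    # The register is a live, queryable inventory, not a one-off spreadsheet.
    REGISTER: dict[str, AgentRecord] = {}

    def register_agent(record: AgentRecord) -> None:
        REGISTER[record.agent_id] = record

    def unowned_agents() -> list[str]:
        """Surface shadow builds: any agent with no named owner."""
        return [aid for aid, rec in REGISTER.items() if not rec.owner]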

Outcomes

  • Approve measurable goals per use case with thresholds for harm (a policy sketch follows this list).
  • List irreversible actions that always need a human decision.
  • Map each action to an identity and a purpose code.
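
One way to make those outcome checks enforceable is to express the policy as data rather than prose, so it can be reviewed, versioned and checked in code. The use case, metrics and thresholds below are invented for illustration.

    # Hypothetical board-approved outcome policy for a single use case.
    OUTCOME_POLICY = {
        "use_case": "invoice-triage",
        "goals": {"auto_resolved_rate": 0.60},          # measurable business goal
        "harm_thresholds": {"misrouted_per_week": 10},  # thresholds for harm
        "irreversible_actions": ["issue_refund"],       # always need a human decision
    }

    def within_thresholds(metrics: dict[str, float]) -> bool:
        """Escalate or halt as soon as a measured harm crosses its threshold."""
        limits = OUTCOME_POLICY["harm_thresholds"]
        return all(metrics.get(name, 0) <= limit for name, limit in limits.items())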

Guardrails

  • Unify identities for people, services and agents.
  • Apply least privilege, expiring credentials and subagent patterns.
  • Mandate signed requests and responses with continuous monitoring (see the signing sketch after this list).
  • Keep a person in the loop for release steps or customer-impacting changes.
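
For the signed-requests item, a simple HMAC scheme illustrates the idea. The per-agent key here is a placeholder; real keys would live in a secrets manager and rotate alongside the short-lived credentials described earlier.

    import hashlib
    import hmac
    import json

    # Placeholder key for illustration only; never hard-code real secrets.
    AGENT_KEYS = {"agent-invoice-bot": b"example-key-not-for-production"}

    def sign_request(agent_id: str, payload: dict) -> str:
        """Sign the canonicalised request body with the agent's key."""
        body = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(AGENT_KEYS[agent_id], body, hashlib.sha256).hexdigest()

    def verify_request(agent_id: str, payload: dict, signature: str) -> bool:
        """Reject any call whose body was not signed by the claimed identity."""
        expected = sign_request(agent_id, payload)
        return hmac.compare_digest(expected, signature)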

The Agentic AI Landscape

Complexity is rising. Fragmented stacks hide blind spots across network, cloud, SaaS and OT. Unit 42 reporting shows most incidents now span multiple attack surfaces. Trust remains too high in too many places, with overpermissioned accounts still the norm in cloud estates. Response times are measured in days, when minutes should be the goal. Agentic projects can amplify each of these weaknesses unless they are checked from inception.

Where Palo Alto Networks Fits

Securing your AI innovation isn’t hard. Our role is to move the focus away from complexity and toward natively integrated, innovative, cost-effective and real-time outcomes for our customers. We call this approach “platformization.” By pairing it with a zero trust security culture, organisations can secure AI projects for employees and developers alike. We introduced Prisma® AIRS™ to give CXOs back visibility, control and compliance across GenAI and agentic systems, applying one policy across their complete estate, with 100% visibility across their attack surface.

What Good Looks Like

Agentic AI succeeds only when embedded in enterprise-wide programmes with clear ownership and oversight. Governance is present from the first design doc to the quarterly review, outcomes are clear and guardrails are built in from the outset. Strong delivery happens when security is a core partner throughout. The work starts in the boardroom and then flows through architecture, engineering and operations.

Learn more about how Prisma AIRS, the world's most comprehensive AI security platform, helps customers secure all apps, agents, models and data.

Key Takeaways

  • Agentic AI projects face significant security challenges due to poor governance. Many organizations are rushing into agentic AI without establishing a strong security mindset, leading to a high rate of project cancellations and failures. The core issue isn't bad code, but a lack of board-level oversight and clear governance.
  • Agentic AI shifts the focus from answers to actions, increasing the need for robust control and accountability. This shift brings speed and scale but also moves the control surface to identity, privilege and oversight. Success hinges on clear lines of accountability across the boardroom, not just code quality.
  • A "boardroom blueprint" for agentic AI success involves establishing governance, defining and limiting outcomes, and building guardrails from day one. This includes forming an Agentic Governance Council, setting measurable business goals with risk awareness, and implementing zero trust principles with unified identities, least privilege and continuous monitoring for agents.
