The early weeks of 2026 have already made one thing clear: Government cybersecurity is in a new phase, shaped not by incremental change, but by the rapid integration of AI into core public-sector missions. AI systems are now embedded in critical infrastructure, federal service delivery, research environments, as well as state and local operations. At the same time, nation-state adversaries are leveraging AI to accelerate intrusion, scale deception and manipulate trusted systems in ways not possible even a year ago.
As Senior Vice President of Public Sector at Palo Alto Networks, I see a decisive shift underway. Defending the public sector in 2026 means navigating a world where security depends on verifying identity, securing data and governing AI-driven systems that act without human intervention. Success now hinges on architectures that assume automation, operations that prioritize coordination, and governance frameworks capable of managing AI at mission scale.
Here are the developments that will define the year ahead.
Federal Government
1. AI-Native Security Must Become Integral to Federal Operations
AI in federal environments is no longer an experiment. Agencies are now designing workflows, SOC missions and cloud architectures around AI-driven detection and response. The emphasis is shifting from supplementing human analysts to building systems that maintain visibility, correlate threats, and respond autonomously when human capacity is limited. This builds on what we forecasted last year, when federal cybersecurity teams began using AI to replace manual workflows and drive down detection and response times.
The shift will be practical. Federal teams must plan to deploy AI systems that correlate logs, identify behavioral anomalies, prioritize threats, and suppress noise before analysts ever see an alert. Manual, ticket-based workflows will no longer meet federal timelines for investigation or reporting, particularly as adversaries automate more phases of attack.
2. Identity Emerges as the Central Federal Security Challenge
The biggest shift in 2026 will be the collapse between “identity” and “attack surface.” Deepfake technologies now operate in real time. AI-generated voices and video can impersonate senior leaders at a level undetectable by traditional controls. Machine identities continue to proliferate; they will outnumber human identities this year. And autonomous agents can initiate high-impact actions without human oversight. This reflects a broader crisis of authenticity now reshaping how enterprises defend identity itself.
Identity abuse will no longer be limited to credential theft. This turns identity into a systemic risk. One compromised identity (human, machine or agent) can cascade through automated systems with little friction. Federal programs will need to prioritize continuous identity verification, stronger proofing and governance frameworks that validate the legitimacy of both human and AI-driven activity.
3. AI Systems Must Be Secure-by-Design
Stemming from the clear mandate in the AI Action Plan (and subsequent work by NIST to develop an AI/Cyber Profile on top of the existing Cybersecurity Framework) agencies will steadily integrate AI security into their deployment of AI technologies.
This imperative is critical as AI systems are susceptible to novel threats. Data poisoning of training sets, manipulated inputs and hidden instructions in untrusted datasets compromise the intelligence that agencies rely on for analysis, planning and mission support. To support the security of this AI-first moment, Palo Alto Networks was proud to make its AI security platform, Prisma® AIRS™, available through the GSA OneGov initiative.
4. Nation-State Operations Expand Through AI Automation
Adversaries will use AI to compress the time between reconnaissance, exploitation and lateral movement. We expect rapidly increasing the use of AI to chain vulnerabilities, tailor social engineering campaigns, and generated malware variants that adapt in real time.
The focus will broaden beyond IT networks. AI will be used to disrupt OT systems and target sensitive research environments. Foreign intelligence services will weaponize AI to blur the line between intrusion and information operations, producing hybrid campaigns that attack both systems and the legitimacy of institutions.
5. Autonomous SOC Capabilities Become Essential
Federal SOCs will evolve from human-centered command centers to hybrid operations where autonomous agents run major components of the detection and response mission. These agents will triage alerts, enforce containment, and initiate predefined responses.
This evolution comes with risk. AI agents with broad authority can be misused or manipulated if not properly governed. Agencies will need safeguards to track agent behavior, enforce least privilege on agents, and prevent misuse through runtime monitoring and “AI firewall” controls designed to stop malicious prompts and unauthorized actions. The same pressures are shaping enterprise security, where controls like AI firewalls and circuit breaker mechanisms are becoming standard practice. Automation will only strengthen federal security if paired with rigorous oversight and continuous validation of agent activity.
6. Shared and Federated SOC Structures Gain Momentum
As threats scale, agencies will increasingly operate through shared or federated security structures. Instead of isolated SOCs, agencies will adopt analytics layers capable of correlating activity across departments and exchanging findings in real time.
This shift will reduce redundancy and provide faster insight into nation-state campaigns that cross federal boundaries. Early adopters will establish shared analytic and response frameworks that allow agencies to coordinate without sacrificing mission-specific control. Civilian agencies will lead early adoption with broader participation across defense and national security stakeholders expected later in the year.
7. The Post-Quantum Deadline Becomes Immediate
In 2026, post-quantum cryptography planning will move to implementation. Accelerated advances in quantum computing and AI-based cryptanalysis will push agencies to transition from pilot efforts to mandated modernization.
Agencies will focus on discovering where vulnerable algorithms are used, replacing outdated libraries, and implementing crypto-agility so systems can evolve without major redesigns. Systems with unpatchable cryptographic components will be flagged for full replacement, forcing agencies to reconcile years of accumulated “crypto debt.”
8. Data Trust and Cloud Workload Protection Become Priority Missions
The rise of AI workloads will force agencies to rethink how they protect data. Infrastructure controls alone cannot detect when training data has been manipulated or when model outputs no longer reflect real-world conditions.
Agencies will unify developer and security workflows and use tools like Data Security Posture Management and AI security posture management (AI-SPM) to track data lineage and enforce protections at runtime. Enterprises are addressing the same issue by bringing development and security teams together under shared data governance models. Ensuring model trustworthiness will become a mission-support requirement, not just a security objective.
9. Platform Consolidation Becomes Necessary
Fragmented tools cannot support the visibility and oversight required for AI governance. Executives will push for platform consolidation to unify network, identity, cloud, endpoint and AI security. Integrated platforms will gain favor because they enable consistent policy enforcement and a single operational picture across increasingly automated environments.
State, Local and Educational Institutions
1. AI Adoption Splits SLED into Distinct Tiers
In 2026, disparities in funding and technical capacity will widen. Some states will deploy AI across security operations, citizen services and identity verification. Others will struggle to maintain legacy systems.
Well-resourced jurisdictions will reduce response times and improve resilience. Underfunded ones will remain exposed to ransomware and disruption. Without targeted modernization efforts, a national divide in SLED cybersecurity maturity will deepen.
2. Regional Models Become the Practical Path Forward
Silos are no longer sustainable. SLED organizations will rely on shared SOCs, regional threat intelligence hubs and coordinated incident response agreements. States will formalize partnerships to share expertise, reduce costs and defend interconnected systems. This evolution represents the maturation of the “team sport” mentality we predicted in 2025. These models reflect operational reality: Compromised data or infrastructure in one jurisdiction often creates immediate risk for its neighbors.
3. Higher Education Redesigns Its Security Baseline
Universities will classify cybersecurity alongside energy, research infrastructure and physical security as essential institutional functions. Secure browser adoption, stronger vendor oversight and centralized identity governance will become the norm.
AI research environments will receive increased scrutiny, and universities participating in federally funded research will face stricter compliance requirements to prevent data poisoning and model manipulation. Institutions with large research portfolios will prioritize securing lab environments where AI models are trained and evaluated.
4. K–12 Systems Enter a New Phase of Security Oversight
States will introduce new security mandates for K–12 environments, covering MFA, network segmentation, secure browsers, identity verification and foundational zero trust principles. AI-enabled ransomware will remain a threat. Smaller districts will adopt managed services or regional support structures as they confront growing operational and compliance demands. Districts that modernize identity controls and browser security will significantly reduce their exposure compared to those reliant on legacy tools. Building on the regulatory momentum we predicted in 2025, K–12 institutions will continue moving from defensive posture to proactive security adoption.
5. Local Governments Face Escalating AI-Driven Ransomware
Municipal governments remain high-value targets due to limited staffing and aging infrastructure. AI gives threat actors the ability to automate reconnaissance, craft targeted phishing messages, and identify vulnerabilities with little effort.
Attacks timed to public safety incidents or weather emergencies will increase, meaning local governments will need stronger identity controls, automated endpoint protection and access to managed detection and response. Operational continuity will depend on reducing time-to-detect and time-to-contain, capabilities that smaller municipalities cannot achieve without external support.
6. Managed Services and Platform Consolidation Become Standard
As technical demands grow, SLED organizations will move toward managed SOC models and consolidated vendor ecosystems. Platforms that integrate data protection, threat detection, identity governance and AI oversight will gain traction. Point tools without interoperability will decline. Budget-constrained environments will favor comprehensive platforms that reduce operational burden and simplify compliance.
7. Identity and Data Trust Become Central SLED Priorities
SLED organizations manage sensitive student records, election data and social services information. These environments are increasingly strained by the rapid growth of machine identities and AI-driven applications.
Synthetic identities and AI-generated credentials will be used to infiltrate systems with limited oversight. Continuous identity verification, data lineage tracking and posture management will become essential to prevent fraud, service disruption and data manipulation. Identity assurance and data integrity will become the foundation of public trust at the state and local level.