[](https://www.paloaltonetworks.com/?ts=markdown) * Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get Support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) ![x close icon to close mobile navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/x-black.svg) [![Palo Alto Networks logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg)](https://www.paloaltonetworks.com/?ts=markdown) ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [](https://www.paloaltonetworks.com/?ts=markdown) * Products ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Products [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [AI Security](https://www.paloaltonetworks.com/ai-security?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise Device Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical Device Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Next-Gen Trust Security](https://www.paloaltonetworks.com/network-security/next-gen-trust-security?ts=markdown) * [OT Device Security](https://www.paloaltonetworks.com/network-security/ot-device-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex AgentiX](https://www.paloaltonetworks.com/cortex/agentix?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Exposure Management](https://www.paloaltonetworks.com/cortex/exposure-management?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Cortex Advanced Email Security](https://www.paloaltonetworks.com/cortex/advanced-email-security?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Unit 42 Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) [Next-Generation Identity Security](https://www.paloaltonetworks.com/idira?ts=markdown) * [Privileged Access Management](https://www.paloaltonetworks.com/idira/human/privileged-access-management?ts=markdown) * [Identity and Access Management](https://www.paloaltonetworks.com/idira/human/identity-and-access-management?ts=markdown) * [Endpoint Privilege Manager](https://www.paloaltonetworks.com/idira/human/endpoint-privilege-manager?ts=markdown) * [Identity Governance](https://www.paloaltonetworks.com/idira/human/identity-governance?ts=markdown) * [Workforce Password Management](https://www.paloaltonetworks.com/idira/human/workforce-password-management?ts=markdown) * [Agentic Identities](https://www.paloaltonetworks.com/idira/agentic?ts=markdown) * [Secrets Management](https://www.paloaltonetworks.com/idira/machine/secrets-management?ts=markdown) * [Unified Secrets Governance](https://www.paloaltonetworks.com/idira/machine/unified-secrets-governance?ts=markdown) * [Application Credentials Delivery](https://www.paloaltonetworks.com/idira/machine/application-credentials-delivery?ts=markdown) * [Vendor Privileged Access](https://www.paloaltonetworks.com/idira/human/vendor-privileged-access?ts=markdown) * Solutions ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Solutions Secure AI by Design * [Secure AI Ecosystem](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [Secure GenAI Usage](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) Network Security * [Cloud Network Security](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Data Center Security](https://www.paloaltonetworks.com/network-security/data-center?ts=markdown) * [DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Intrusion Detection and Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Device Security](https://www.paloaltonetworks.com/network-security/device-security?ts=markdown) * [OT Security](https://www.paloaltonetworks.com/network-security/ot-security-solution?ts=markdown) * [5G Security](https://www.paloaltonetworks.com/network-security/5g-security?ts=markdown) * [Secure All Apps, Users and Locations](https://www.paloaltonetworks.com/sase/secure-users-data-apps-devices?ts=markdown) * [Secure Branch Transformation](https://www.paloaltonetworks.com/sase/secure-branch-transformation?ts=markdown) * [Secure Work on Any Device](https://www.paloaltonetworks.com/sase/secure-work-on-any-device?ts=markdown) * [VPN Replacement](https://www.paloaltonetworks.com/sase/vpn-replacement-for-secure-remote-access?ts=markdown) * [Web \& Phishing Security](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) Cloud Security * [Application Security Posture Management (ASPM)](https://www.paloaltonetworks.com/cortex/cloud/application-security-posture-management?ts=markdown) * [Software Supply Chain Security](https://www.paloaltonetworks.com/cortex/cloud/software-supply-chain-security?ts=markdown) * [Code Security](https://www.paloaltonetworks.com/cortex/cloud/code-security?ts=markdown) * [Cloud Security Posture Management (CSPM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-security-posture-management?ts=markdown) * [Cloud Infrastructure Entitlement Management (CIEM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-infrastructure-entitlement-management?ts=markdown) * [Data Security Posture Management (DSPM)](https://www.paloaltonetworks.com/cortex/cloud/data-security-posture-management?ts=markdown) * [AI Security Posture Management (AI-SPM)](https://www.paloaltonetworks.com/cortex/cloud/ai-security-posture-management?ts=markdown) * [Cloud Detection and Response (CDR)](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Cloud Workload Protection (CWP)](https://www.paloaltonetworks.com/cortex/cloud/cloud-workload-protection?ts=markdown) * [Web Application \& API Security (WAAS)](https://www.paloaltonetworks.com/cortex/cloud/web-app-api-security?ts=markdown) Security Operations * [Cloud Detection and Response (CDR)](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Security Information and Event Management](https://www.paloaltonetworks.com/cortex/modernize-siem?ts=markdown) * [Network Security Automation](https://www.paloaltonetworks.com/cortex/network-security-automation?ts=markdown) * [Incident Case Management](https://www.paloaltonetworks.com/cortex/incident-case-management?ts=markdown) * [SOC Automation](https://www.paloaltonetworks.com/cortex/security-operations-automation?ts=markdown) * [Threat Intel Management](https://www.paloaltonetworks.com/cortex/threat-intel-management?ts=markdown) * [Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Attack Surface Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/attack-surface-management?ts=markdown) * [Compliance Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/compliance-management?ts=markdown) * [Internet Operations Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/internet-operations-management?ts=markdown) * [Extended Data Lake (XDL)](https://www.paloaltonetworks.com/cortex/cortex-xdl?ts=markdown) * [Agentic Assistant](https://www.paloaltonetworks.com/cortex/cortex-agentic-assistant?ts=markdown) Endpoint Security * [Endpoint Protection](https://www.paloaltonetworks.com/cortex/endpoint-protection?ts=markdown) * [Extended Detection \& Response](https://www.paloaltonetworks.com/cortex/detection-and-response?ts=markdown) * [Ransomware Protection](https://www.paloaltonetworks.com/cortex/ransomware-protection?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/cortex/digital-forensics?ts=markdown) Identity Security * [Human Identities](https://www.paloaltonetworks.com/idira/human?ts=markdown) * [Machine Identities](https://www.paloaltonetworks.com/idira/machine?ts=markdown) * [Agentic Identities](https://www.paloaltonetworks.com/idira/agentic?ts=markdown) [Industries](https://www.paloaltonetworks.com/industry?ts=markdown) * [Public Sector](https://www.paloaltonetworks.com/industry/public-sector?ts=markdown) * [Financial Services](https://www.paloaltonetworks.com/industry/financial-services?ts=markdown) * [Manufacturing](https://www.paloaltonetworks.com/industry/manufacturing?ts=markdown) * [Healthcare](https://www.paloaltonetworks.com/industry/healthcare?ts=markdown) * [Small \& Medium Business Solutions](https://www.paloaltonetworks.com/industry/small-medium-business-portfolio?ts=markdown) * Services ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Services [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Assess](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Frontier AI Defense](https://www.paloaltonetworks.com/unit42/ai-advantage?ts=markdown) * [AI Security Assessment](https://www.paloaltonetworks.com/unit42/assess/ai-security-assessment?ts=markdown) * [Attack Surface Assessment](https://www.paloaltonetworks.com/unit42/assess/attack-surface-assessment?ts=markdown) * [Breach Readiness Review](https://www.paloaltonetworks.com/unit42/assess/breach-readiness-review?ts=markdown) * [BEC Readiness Assessment](https://www.paloaltonetworks.com/bec-readiness-assessment?ts=markdown) * [Cloud Security Assessment](https://www.paloaltonetworks.com/unit42/assess/cloud-security-assessment?ts=markdown) * [Compromise Assessment](https://www.paloaltonetworks.com/unit42/assess/compromise-assessment?ts=markdown) * [Cyber Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/cyber-risk-assessment?ts=markdown) * [M\&A Cyber Due Diligence](https://www.paloaltonetworks.com/unit42/assess/mergers-acquisitions-cyber-due-diligence?ts=markdown) * [Penetration Testing](https://www.paloaltonetworks.com/unit42/assess/penetration-testing?ts=markdown) * [Purple Team Exercises](https://www.paloaltonetworks.com/unit42/assess/purple-teaming?ts=markdown) * [Ransomware Readiness Assessment](https://www.paloaltonetworks.com/unit42/assess/ransomware-readiness-assessment?ts=markdown) * [SOC Assessment](https://www.paloaltonetworks.com/unit42/assess/soc-assessment?ts=markdown) * [Supply Chain Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/supply-chain-risk-assessment?ts=markdown) * [Tabletop Exercises](https://www.paloaltonetworks.com/unit42/assess/tabletop-exercise?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Respond](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Cloud Incident Response](https://www.paloaltonetworks.com/unit42/respond/cloud-incident-response?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/unit42/respond/digital-forensics?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond/incident-response?ts=markdown) * [Managed Detection and Response](https://www.paloaltonetworks.com/unit42/respond/managed-detection-response?ts=markdown) * [Managed Threat Hunting](https://www.paloaltonetworks.com/unit42/respond/managed-threat-hunting?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Transform](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [IR Plan Development and Review](https://www.paloaltonetworks.com/unit42/transform/incident-response-plan-development-review?ts=markdown) * [Security Program Design](https://www.paloaltonetworks.com/unit42/transform/security-program-design?ts=markdown) * [Virtual CISO](https://www.paloaltonetworks.com/unit42/transform/vciso?ts=markdown) * [Zero Trust Advisory](https://www.paloaltonetworks.com/unit42/transform/zero-trust-advisory?ts=markdown) [Global Customer Services](https://www.paloaltonetworks.com/services?ts=markdown) * [Education \& Training](https://www.paloaltonetworks.com/services/education?ts=markdown) * [Professional Services](https://www.paloaltonetworks.com/services/consulting?ts=markdown) * [Success Tools](https://www.paloaltonetworks.com/services/customer-success-tools?ts=markdown) * [Support Services](https://www.paloaltonetworks.com/services/solution-assurance?ts=markdown) * [Customer Success](https://www.paloaltonetworks.com/services/customer-success?ts=markdown) [![](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/logo-unit-42.svg) UNIT 42 RETAINER Custom-built to fit your organization's needs, you can choose to allocate your retainer hours to any of our offerings, including proactive cyber risk management services. Learn how you can put the world-class Unit 42 Incident Response team on speed dial. Learn more](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * Partners ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Partners NextWave Partners * [NextWave Partner Community](https://www.paloaltonetworks.com/partners?ts=markdown) * [Cloud Service Providers](https://www.paloaltonetworks.com/partners/nextwave-for-csp?ts=markdown) * [Global Systems Integrators](https://www.paloaltonetworks.com/partners/nextwave-for-gsi?ts=markdown) * [Technology Partners](https://www.paloaltonetworks.com/partners/technology-partners?ts=markdown) * [Service Providers](https://www.paloaltonetworks.com/partners/service-providers?ts=markdown) * [Solution Providers](https://www.paloaltonetworks.com/partners/nextwave-solution-providers?ts=markdown) * [Managed Security Service Providers](https://www.paloaltonetworks.com/partners/managed-security-service-providers?ts=markdown) Take Action * [Portal Login](https://www.paloaltonetworks.com/partners/nextwave-partner-portal?ts=markdown) * [Managed Services Program](https://www.paloaltonetworks.com/partners/managed-security-services-provider-program?ts=markdown) * [Become a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=becomepartner) * [Request Access](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=requestaccess) * [Find a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerlocator) [CYBERFORCE CYBERFORCE represents the top 1% of partner engineers trusted for their security expertise. Learn more](https://www.paloaltonetworks.com/cyberforce?ts=markdown) * Company ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Company Palo Alto Networks * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Management Team](https://www.paloaltonetworks.com/about-us/management?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com) * [Locations](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Ethics \& Compliance](https://www.paloaltonetworks.com/company/ethics-and-compliance?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Military \& Veterans](https://jobs.paloaltonetworks.com/military) [Why Palo Alto Networks?](https://www.paloaltonetworks.com/why-paloaltonetworks?ts=markdown) * [Precision AI Security](https://www.paloaltonetworks.com/precision-ai-security?ts=markdown) * [Our Platform Approach](https://www.paloaltonetworks.com/why-paloaltonetworks/platformization?ts=markdown) * [Accelerate Your Cybersecurity Transformation](https://www.paloaltonetworks.com/why-paloaltonetworks/nam-cxo-portfolio?ts=markdown) * [Awards \& Recognition](https://www.paloaltonetworks.com/about-us/awards?ts=markdown) * [Customer Stories](https://www.paloaltonetworks.com/customers?ts=markdown) * [Global Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Trust 360 Program](https://www.paloaltonetworks.com/resources/whitepapers/trust-360?ts=markdown) Careers * [Overview](https://jobs.paloaltonetworks.com/) * [Culture \& Benefits](https://jobs.paloaltonetworks.com/en/culture/) [A Newsweek Most Loved Workplace "Businesses that do right by their employees" Read more](https://www.paloaltonetworks.com/company/press/2021/palo-alto-networks-secures-top-ranking-on-newsweek-s-most-loved-workplaces-list-for-2021?ts=markdown) * More ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) More Resources * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Unit 42 Threat Research](https://unit42.paloaltonetworks.com/) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Tech Insider](https://techinsider.paloaltonetworks.com/) * [Knowledge Base](https://knowledgebase.paloaltonetworks.com/) * [Palo Alto Networks TV](https://tv.paloaltonetworks.com/) * [Perspectives of Leaders](https://www.paloaltonetworks.com/perspectives/?ts=markdown) * [Cyber Perspectives Magazine](https://www.paloaltonetworks.com/cybersecurity-perspectives/cyber-perspectives-magazine?ts=markdown) * [Regional Cloud Locations](https://www.paloaltonetworks.com/products/regional-cloud-locations?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Security Posture Assessment](https://www.paloaltonetworks.com/security-posture-assessment?ts=markdown) * [Threat Vector Podcast](https://unit42.paloaltonetworks.com/unit-42-threat-vector-podcast/) * [Packet Pushers Podcasts](https://www.paloaltonetworks.com/podcasts/packet-pusher?ts=markdown) Connect * [LIVE community](https://live.paloaltonetworks.com/) * [Events](https://events.paloaltonetworks.com/) * [Executive Briefing Center](https://www.paloaltonetworks.com/about-us/executive-briefing-program?ts=markdown) * [Demos](https://www.paloaltonetworks.com/demos?ts=markdown) * [Contact us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) [Blog Stay up-to-date on industry trends and the latest innovations from the world's largest cybersecurity Learn more](https://www.paloaltonetworks.com/blog/) * Sign In ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Language * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) * [Demos and Trials](https://www.paloaltonetworks.com/get-started?ts=markdown) Search All * [Tech Docs](https://docs.paloaltonetworks.com/search) Close search modal [Introducing Idira, the next-generation identity security platform.](https://www.paloaltonetworks.com/idira?ts=markdown) [](https://www.paloaltonetworks.com/?ts=markdown) 1. [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) 2. [Generative AI Security](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security?ts=markdown) 3. [What Is Frontier AI Security?](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security?ts=markdown) Table of Contents * [What Is Generative AI Security? \[Explanation/Starter Guide\]](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security?ts=markdown) * [Why is GenAI security important?](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#why-is-genai-security-important?ts=markdown) * [How does GenAI security work?](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#how-does-genai-security-work?ts=markdown) * [What are the different types of GenAI security?](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#different-types-of-genai?ts=markdown) * [What are the main GenAI security risks and threats?](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#gen-ai-risks?ts=markdown) * [How to secure GenAI in 5 steps](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#gen-ai-5-steps?ts=markdown) * [Top 12 GenAI security best practices](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#gen-ai-best-practices?ts=markdown) * [GenAI security FAQs](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security#gen-ai-security-faqs?ts=markdown) * [Frontier AI Security Checklist](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist?ts=markdown) * [Inventory](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#inventory?ts=markdown) * [Data](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#data?ts=markdown) * [Identity](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#identity?ts=markdown) * [Actions](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#actions?ts=markdown) * [Monitoring](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#monitoring?ts=markdown) * [Evaluation](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#evaluation?ts=markdown) * [Governance](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#governance?ts=markdown) * [Frontier AI Security Checklist FAQs](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-checklist#faqs?ts=markdown) * [Frontier Security Implementation Roadmap](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap?ts=markdown) * [Do You Need a Frontier AI Security Implementation Roadmap?](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap#roadmap?ts=markdown) * [First 30 Days: Establish Visibility and Stop Uncontrolled Action](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap#action?ts=markdown) * [Days 31 to 90: Convert Discovery Into Controls](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap#controls?ts=markdown) * [Months 4 to 12: Mature AI Security into an Operating Capability](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap#capability?ts=markdown) * [Frontier AI Security Implementation Roadmap FAQs](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap#faqs?ts=markdown) * What Is Frontier AI Security? * [Why Frontier AI Security Now](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#why?ts=markdown) * [How Frontier Models Work](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#how?ts=markdown) * [Why Architecture Matters for Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#security?ts=markdown) * [Frontier AI Threat Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#frontier?ts=markdown) * [Core Security Challenges](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#core?ts=markdown) * [Frontier AI Security Controls](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#controls?ts=markdown) * [Evaluation, Red Teaming, and Assurance](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#evaluation?ts=markdown) * [Governance and Operating Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#governance?ts=markdown) * [Third-Party AI Risk](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#risk?ts=markdown) * [Metrics for Frontier AI Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#metrics?ts=markdown) * [Frontier AI Security FAQs](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#faqs?ts=markdown) * [What Is Frontier AI?](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai?ts=markdown) * [Frontier AI Explained](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#explained?ts=markdown) * [How Do Frontier Models Work?](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#how?ts=markdown) * [Applications and Use Cases of Frontier Models](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#models?ts=markdown) * [What Are the Benefits of Frontier Models?](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#what?ts=markdown) * [Challenges with Frontier AI](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#challenges?ts=markdown) * [Frontier AI FAQs](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai#faqs?ts=markdown) # What Is Frontier AI Security? 5 min. read [Cloud Security Checklist for Defenders](https://www.paloaltonetworks.com/resources/guides/ultimate-cloud-security-checklist?ts=markdown) Table of Contents * * [Why Frontier AI Security Now](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#why?ts=markdown) * [How Frontier Models Work](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#how?ts=markdown) * [Why Architecture Matters for Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#security?ts=markdown) * [Frontier AI Threat Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#frontier?ts=markdown) * [Core Security Challenges](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#core?ts=markdown) * [Frontier AI Security Controls](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#controls?ts=markdown) * [Evaluation, Red Teaming, and Assurance](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#evaluation?ts=markdown) * [Governance and Operating Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#governance?ts=markdown) * [Third-Party AI Risk](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#risk?ts=markdown) * [Metrics for Frontier AI Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#metrics?ts=markdown) * [Frontier AI Security FAQs](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#faqs?ts=markdown) 1. Why Frontier AI Security Now * * [Why Frontier AI Security Now](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#why?ts=markdown) * [How Frontier Models Work](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#how?ts=markdown) * [Why Architecture Matters for Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#security?ts=markdown) * [Frontier AI Threat Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#frontier?ts=markdown) * [Core Security Challenges](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#core?ts=markdown) * [Frontier AI Security Controls](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#controls?ts=markdown) * [Evaluation, Red Teaming, and Assurance](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#evaluation?ts=markdown) * [Governance and Operating Model](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#governance?ts=markdown) * [Third-Party AI Risk](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#risk?ts=markdown) * [Metrics for Frontier AI Security](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#metrics?ts=markdown) * [Frontier AI Security FAQs](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security#faqs?ts=markdown) Frontier AI security protects advanced AI systems, as well as the infrastructure that runs them, the data they process, and the actions they can take through connected tools. It governs model access, weights, prompts, retrieval, agents, permissions, evaluations, monitoring, and incident response so frontier capabilities can operate without exposing enterprise systems, users, or sensitive data. ## Why Frontier AI Security Now Frontier AI security protects advanced AI systems as operational infrastructure. The scope covers the model, the data that grounds it, the tools it can call, the identities it can use, and the decisions it can influence. Earlier AI risk programs focused on generated content --- hallucinations, toxicity, bias, inappropriate disclosure. [Frontier AI](/content/pan/en_us/cyberpedia/what-is-frontier-ai) demands strict discipline because model output increasingly triggers operational action. A model connected to enterprise systems can open a ticket, query a database, summarize a confidential file, call an API, modify a cloud resource, or guide a human through a high-risk change. [OWASP's LLM Top 10 reflects that shift](/content/pan/en_us/resources/infographics/llm-applications-owasp-10) --- prompt injection and excessive agency now rank as primary risks precisely because frontier systems can act. Risk moves through the full execution path --- prompts, embeddings, retrieval stores, API calls, SaaS connectors, code interpreters, browser sessions, memory, logs, and human approvals. The model sits at the center, but exposure lives at every connection point. [NIST's Generative AI Profile](https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence) frames generative AI risk as a cross-sector governance issue requiring management across the AI lifecycle, and security programs need to match that scope. Boards and CISOs need a governance model built for that operating reality. AI risk surfaces as a cyber incident, privacy failure, supply chain compromise, insider event, cloud misconfiguration, or regulatory disclosure problem --- often without signaling which domain owns it. A policy document won't control an AI agent holding production credentials. A risk committee won't see exposure without logs. A SOC won't investigate AI misuse when prompts, retrieval events, tool calls, and agent actions sit outside its detection fabric. Frontier AI security belongs inside the enterprise security architecture, drawing on software security discipline, identity governance rigor, cloud security visibility, [SOC](/content/pan/en_us/cyberpedia/what-is-a-soc) response capability, privacy engineering data controls, and executive risk accountability. ## How Frontier Models Work Frontier models operate as composed systems, and security controls must span the model, context pipeline, retrieval layer, tool interfaces, identity paths, orchestration logic, and runtime telemetry. A frontier AI system typically combines a foundation model with post-training layers, safety classifiers, retrieval systems, orchestration logic, and tool interfaces. Post-training shapes behavior through instruction tuning, preference optimization, reinforcement learning, adversarial testing, and policy training. Reasoning models add a compounding control problem. They allocate compute to planning and problem-solving before responding, and current reasoning systems combine that planning capability with tool access, including web browsing, code execution, file analysis, and memory. Security teams should treat the full toolchain as attack surface. Model routing adds further complexity. A single user request may pass through intent classification, safety review, retrieval, planning, model selection, tool execution, output inspection, and policy enforcement before the user sees a response. A weakness at any step can compromise the whole workflow, which means each step needs logging, ownership, change control, and failure-mode testing. Context compounds the exposure. Frontier systems assemble working context from system instructions, user prompts, prior conversation, retrieved documents, uploaded files, tool results, memory entries, code outputs, and policy constraints --- all inside a single context window that may include attacker-controlled material. A signed system instruction and a retrieved web page don't carry equal trust, but the model receives both. A malicious document can embed hidden instructions. A retrieved policy can be stale or poisoned. A tool result can inject commands back into the model's next reasoning step. Memory requires separate governance. Persistent memory improves continuity but can preserve sensitive facts, business logic, credentials, regulated data, or adversarial instructions across sessions. Controls need retention limits, user visibility, administrative policy, audit logs, and deletion paths. Tool use converts the model from a responder into an operational actor. Current frontier systems combine reasoning with tool calls during problem-solving, meaning that the model may observe, decide, act, and revise across multiple steps before reaching a stopping condition. Agents extend that pattern further, decomposing goals and choosing tools, and then inspecting results and autonomously adjusting plans. The security boundary must follow the agent's effective authority: * Which data it can read * Which systems it can modify * Which credentials it can use * Which actions require approval * Which events the SOC can observe Actions need to be classified by consequence. Read-only analysis, draft generation, ticket creation, code changes, cloud modifications, and production operations each carry different risk and require different approval paths. **Related Article** : [How the Latest Frontier AI Models Are Driving the Need for Real-Time Cloud Security](/content/pan/en_us/blog/cloud-security/frontier-ai-models-real-time-cloud-security) ## Why Architecture Matters for Security Frontier AI security starts with architecture because model behavior emerges from the full system. A safe model can still produce unsafe outcomes when it receives poisoned context, retrieves overpermissive data, calls an exposed tool, uses broad credentials, or acts inside a weak approval workflow. Critically, failure tends to emerge between components rather than solely inside the model, which means base-model testing can't surface system-level risk. A model may pass a safety evaluation and still leak data because the retrieval layer ignores document permissions. It may refuse harmful instructions and still trigger an unsafe action because a tool grants excessive authority. It may generate an accurate answer and still violate policy because the orchestration layer routes regulated data to the wrong [endpoint](/content/pan/en_us/cyberpedia/what-is-an-endpoint). Understanding this, attackers bypass the model and target the seams between components. ### Build Around the AI Execution Path The execution path is the right organizing principle because it makes every other control decision coherent. Security teams need to know which user or agent invoked the system, what context entered the model, which data sources retrieval accessed, which tools became available, which identity authorized action, and what changed downstream. * Inventory matters because it identifies AI systems on the execution path. * Identity matters because it defines what the path can reach. * Data controls matter because they govern what enters and leaves context. * Monitoring matters because it reconstructs what happened when the path fails. ### Control Authority at the Boundaries Control points belong where authority changes. The critical boundaries are user to model, model to retrieval, model to tool, tool to enterprise system, and output to downstream workflow. Each represents a trust transition, a point where the AI system either gains access to something new or produces output that affects something outside itself. High-risk boundaries warrant stronger enforcement --- entitlement-aware retrieval, scoped agent credentials, approval gates for consequential actions, isolated execution environments, and telemetry routed to the SOC. The architectural goal is preventing the AI system from accumulating more access, context, or action authority than the specific workflow requires. ### Connect the Control Planes Frontier AI governed in a silo will be ungoverned in practice. The architecture must connect IAM, [data security](/content/pan/en_us/cyberpedia/what-is-data-security), [application security](/content/pan/en_us/cyberpedia/application-security), [cloud security](/content/pan/en_us/cyberpedia/what-is-a-cloud-security), SaaS security, vendor risk, and incident response because an incomplete connection is where real exposure hides. A team that approves a model deployment while missing the vector store it queries has approved half a system. A team that monitors the endpoint while missing the tool call has half the evidence it needs. ## Frontier AI Threat Model A useful [threat model](/content/pan/en_us/cyberpedia/threat-modeling) classifies exposure by the role the AI system plays. The same model can be an asset an attacker targets, a tool an attacker weaponizes, an actor operating inside enterprise workflows, a processor of [sensitive data](/content/pan/en_us/cyberpedia/sensitive-data), and a supply chain dependency. Each role requires different controls, evidence, and response paths. ### Model as Target A frontier model becomes a target when an attacker seeks to steal, alter, clone, or misuse it. Weight theft is the highest-impact scenario. Stolen weights let an adversary replicate capability, bypass provider controls, fine-tune for misuse, or probe safety mechanisms outside monitored infrastructure. Model extraction offers a different path. An attacker repeatedly queries the model and uses the outputs to train a substitute system, exposing commercial capability and revealing decision boundaries without ever touching the weights directly. Unauthorized access broadens the surface further. A compromised account, leaked API key, overpermissive service token, or weak tenant boundary can expose frontier capabilities to users who shouldn't have them. Controls should cover identity, infrastructure, and provenance --- privileged access management, hardened training and inference environments, [secrets protection](/content/pan/en_us/cyberpedia/secrets-management), strong tenant isolation, signed model artifacts, version control, tamper-evident logs, anomaly detection, and provider notification requirements. ### Model as Tool A frontier model becomes a tool when an attacker uses it to accelerate offensive work --- [phishing](/content/pan/en_us/cyberpedia/what-is-phishing), reconnaissance, exploit generation, vulnerability discovery, [malware](/content/pan/en_us/cyberpedia/what-is-malware) development, credential harvesting, and social engineering at scale. Recent evaluations make the trajectory concrete. The UK AI Security Institute reported that [Claude Mythos Preview](https://red.anthropic.com/2026/mythos-preview/) showed significant improvement on multistep cyberattack simulations, and [GPT-5.5 became the second model](https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities) to complete one of AISI's multistep simulations end to end. Anthropic has disclosed that nonexperts used Mythos Preview to find and exploit sophisticated vulnerabilities, including remote code execution flaws. Full autonomy isn't required for meaningful adversary uplift. A model can translate a vague target into an actionable plan, covering everything from identifying exposed services and drafting lure variants to adapting public proof-of-concept code and troubleshooting failures along the way. Defensive planning should assume faster adversary iteration and respond by tightening exposure management. Teams need next-generation identity hygiene and [exploitability-aware patching](/content/pan/en_us/cyberpedia/patch-management), in addition to detection engineering for AI-assisted tradecraft and incident playbooks built for automated reconnaissance and high-volume [social engineering](/content/pan/en_us/cyberpedia/what-is-social-engineering). ### Model as Actor A frontier model becomes an actor when it can take steps through delegated access, as in querying SaaS systems, modifying cloud resources, writing and submitting code, updating records, sending messages, or triggering enterprise workflows. Agency converts model error into operational consequence. A bad tool call, for instance, can change production state, expose data, or disable a control. It can create a persistence path that looks like legitimate activity. Governing model-as-actor risk requires tracking effective authority across seven dimensions: 1. Which identities the agent can use 2. Which systems it can read or write 3. Which actions run automatically 4. Which actions require approval 5. Which operations support dry-run mode 6. Which changes the organization can roll back 7. Which logs reach the SOC ### Model as Data Processor A frontier model becomes a data processor when it ingests, transforms, stores, retrieves, or generates information. Often without deliberate disclosure by the user, sensitive data enters AI workflows through prompts, uploaded files, logs, source code, ticket comments, call transcripts, retrieval indexes, training data, memory, and generated outputs. [Retrieval-augmented generation (RAG)](/content/pan/en_us/cyberpedia/what-is-retrieval-augmented-generation) creates a specific risk pattern. The system retrieves documents on the user's behalf and blends them into a response, meaning weak retrieval design can surface material the user couldn't access directly, expose confidential facts through summaries, or inject stale or poisoned content. Embeddings and vector stores compound the problem because they can preserve semantic traces of sensitive content even after the source document is restricted or deleted. Controls must cover the full data path --- entitlement-aware retrieval, prompt and output logging, retention limits, encryption, tenant isolation, [DLP](/content/pan/en_us/cyberpedia/what-is-data-loss-prevention-dlp) for AI channels, secrets detection, memory governance, and embedding-store access control. ### Model as Supply Chain Component A frontier model becomes a supply chain dependency when enterprise workflows rely on external models, fine-tunes, adapters, datasets, plugins, orchestration frameworks, vector databases, evaluation suites, or agent runtimes. It's easy to imagine, then, that a compromised dataset, malicious adapter, poisoned retrieval corpus, or misconfigured model gateway can affect every downstream workflow that trusts it. Version drift adds a quieter risk. A model that passed evaluation in March may behave differently after an April update, and a connector that shipped with read-only access may gain write capability through a routine product release. [AI security](/content/pan/en_us/cyberpedia/ai-security) teams need provenance and change control across model versions, system prompts, policy configurations, retrieved sources, tool manifests, plugin permissions, evaluation results, and approval records. Vendor contracts should address training use, retention, audit logs, model-change notification, breach notification, subprocessors, exportability, and incident cooperation. A disciplined threat model makes frontier AI security measurable --- protect the model as an asset, constrain it as a tool, govern it as an actor, secure it as a data processor, and validate it as a supply chain dependency. **Related Article** : [Frontier AI and the Future of Defense --- Your Top Questions Answered](https://unit42.paloaltonetworks.com/frontier-ai-top-questions-answered/) ## Core Security Challenges Frontier AI security becomes difficult when model capability, enterprise integration, and [governance maturity](/content/pan/en_us/cyberpedia/ai-governance) move at different speeds. The threat model identifies what must be protected. The harder problem is building controls that hold across probabilistic reasoning, sensitive data, delegated access, third-party systems, and live business workflows --- all simultaneously. ### Capability Outpaces Governance Frontier AI rarely enters the enterprise through a single sanctioned platform. It arrives through SaaS copilots, developer assistants, embedded product features, model APIs, agent builders, browser extensions, and shadow workflows built by employees under pressure to move faster than procurement allows. By the time security teams discover a workflow, it may already hold production credentials, process regulated data, and sit outside every logging requirement the organization has. ### Prompt Injection and Instruction Conflict Prompt injection exploits a structural property of frontier models --- the inability to reliably distinguish trusted instruction from untrusted content. An attacker embeds hostile instructions inside a web page, document, email, ticket, image, or retrieved knowledge object and waits for the model to process it. No active session is required. The attack travels with the content. Instruction conflict compounds the exposure. During a single task, a frontier system may receive system instructions, developer instructions, user prompts, retrieved documents, memory entries, tool outputs, and external content --- all inside one context window. The model resolves competing signals without inherent awareness of which sources an attacker controls. ### Excessive Agency Excessive agency is what transforms model error into business impact. A model that only generates text can mislead a user. A model with tool access can modify a record, send a message, submit code, disable a control, open a firewall rule, approve a transaction, or trigger a downstream workflow --- and do so at machine speed, across multiple systems. OWASP identifies excessive autonomy, excessive functionality, and excessive permissions as the common root causes. Each one expands the blast radius of a model failure, which makes the scope of delegated authority the central design question for every agentic workflow. ### Data Exposure Frontier AI creates leakage paths at every stage of the workflow --- prompts, uploads, chat history, retrieved documents, embeddings, vector stores, tool outputs, model memory, fine-tuning data, evaluation sets, telemetry logs, and generated responses. Sensitive data enters AI workflows because a user pastes it, a connector retrieves it, a file parser extracts it, a memory feature stores it, or a retrieval system returns content the user was never entitled to see. The most underappreciated exposure pattern is treating retrieval as search rather than authorization. A vector index can surface material the requesting user couldn't open directly. A model can distill confidential information into a summary that moves it into a lower-trust channel. A telemetry log can retain regulated data long after the originating system would have enforced a retention limit. The exposure in each case isn't a [breach](/content/pan/en_us/cyberpedia/data-breach). It's the system working as designed, with insufficient controls on what it was allowed to reach. ### Evaluation Limits An evaluation reveals weaknesses, which is not equal to certifying durable safety. A benchmark measures a bounded task under defined conditions. A red-team exercise explores selected attack paths at a point in time. Neither evaluation accounts for what happens when providers update models, users alter workflows, connectors gain permissions, and attackers adapt. Evaluation must follow the system into production, contrary to sitting in a launch checklist that no one reviews again. ### Explainability and Auditability Gaps Frontier models generate fluent rationales without exposing the internal causal path behind an output. A model-generated explanation may be coherent and wrong about why the model acted. It may omit the retrieved document that drove a decision, the tool call that changed state, or the policy check that should have blocked an action. Without [explainability](/content/pan/en_us/cyberpedia/ai-explainability) and system-level traceability, generated outputs can circulate as evidence while the actual decision path remains invisible. ### Cyber Capability Diffusion The enterprise consequence of advancing frontier model capability is exposure compression. Vulnerability discovery, exploit reasoning, reconnaissance, scripting, and attack-path planning all accelerate as models improve. Weak patch pipelines, stale assets, exposed management planes, permissive identities, and inadequate logging were always liabilities. Frontier AI raises the speed at which adversaries can find them, chain them, and operationalize them. **Related Article** : [Frontier AI and the Future of Defense --- Your Top Questions Answered](https://unit42.paloaltonetworks.com/frontier-ai-top-questions-answered/) ## Frontier AI Security Controls Frontier AI security controls must make model behavior, data access, tool use, and human approval governable under real operating conditions. The framework combines prevention, detection, response, and governance, allowing teams to reduce exposure before deployment, surface misuse in production, and contain failure when controls break. ### Preventive Controls Preventive controls limit what frontier AI systems can access, ingest, retrieve, generate, and execute before the model receives context or any tool acts on its output. #### Access Control [Access control](/content/pan/en_us/cyberpedia/access-control) starts from a single principle --- no AI identity should hold more access than its specific workflow requires. Users, agents, service accounts, plugins, and connectors all need scoped credentials. Auditable and revocable credentials. Agentic systems make this challenging because they can acquire and exercise access faster than any human reviewer can track. #### Data Minimization Data minimization keeps sensitive material out of model context by default. Regulated data, credentials, proprietary code, and customer records need redaction or tokenization before reaching prompts, retrieval calls, or model memory unless policy explicitly permits exposure. The entry points are numerous enough that passive accumulation is the norm without deliberate controls to prevent it. #### Prompt Hardening Prompt hardening enforces instruction hierarchy so that system instructions, user input, retrieved documents, and tool results are treated as distinct trust tiers. AI gateways and secure orchestration layers can enforce approved system prompts, block unsafe prompt patterns, and prevent untrusted content from overriding privileged instructions. #### Retrieval Permissions Retrieval permissions must be enforced at query time. A retrieval system that checks permissions at index time but not at query time will surface material users were never authorized to see. High-risk workflows should restrict retrieval to approved, signed corpora so external content can't reach the model through the retrieval path. #### Tool Permission Tool permission scoping gives each tool a manifest defining allowed actions, required approvals, and rollback behavior. Code interpreters, browsers, and agent runtimes should run inside constrained environments with no production access unless policy grants it for a specific task. [Sandboxing](/content/pan/en_us/cyberpedia/sandboxing) and egress filtering keep a compromised tool call from becoming a production incident. #### Policy-as-Code [Policy-as-code](/content/pan/en_us/cyberpedia/what-is-policy-as-code) makes AI rules enforceable rather than advisory. Teams should codify allowed models, approved data classes, permitted tools, action thresholds, approval requirements, and logging mandates inside model gateways, orchestration layers, [CI/CD pipelines](/content/pan/en_us/cyberpedia/what-is-the-ci-cd-pipeline-and-ci-cd-security), and agent runtimes. A policy that lives only in a document won't stop an agent with production credentials. ### Detective Controls Detective controls convert AI activity into security telemetry cloud and SOC teams can act on. Visibility must span prompts, completions, retrieved sources, embedding queries, model refusals, tool calls, policy overrides, memory writes, approval events, blocked actions, and agent plans. AI activity logs should feed [SIEM](/content/pan/en_us/cyberpedia/what-is-siem), [SOAR](/content/pan/en_us/cyberpedia/what-is-soar), [XDR](/content/pan/en_us/cyberpedia/what-is-extended-detection-response-XDR), [CNAPP](/content/pan/en_us/cyberpedia/what-is-a-cloud-native-application-protection-platform), [CDR](/content/pan/en_us/cyberpedia/what-is-cloud-detection-and-response-cdr), UEBA, and [data security platforms](/content/pan/en_us/cyberpedia/data-security-platform). Each log record needs user identity, agent identity, model version, system prompt version, retrieved sources, tool-call arguments, policy decisions, approvals, output disposition, and downstream changes. Because logs that capture AI activity inherit the [data classification](/content/pan/en_us/cyberpedia/data-classification) of the content they describe, sensitive log fields require encryption, retention limits, role-based access, and redaction. Anomaly detection should correlate AI activity against identity, cloud, endpoint, SaaS, code repository, API, and [data movement](/content/pan/en_us/cyberpedia/data-movement) telemetry. Patterns worth detecting include unusual prompt volume, abnormal retrieval breadth, repeated access to sensitive indexes, suspicious tool sequences, unexpected memory writes, large output exports, and agent actions that fall outside approved task boundaries. [Prompt injection](/content/pan/en_us/cyberpedia/what-is-a-prompt-injection-attack) detection must cover indirect inputs --- web pages, documents, tickets, emails, code comments, tool results, and retrieved content--- in addition to user prompts. AI gateways and prompt inspection tooling should flag hidden instructions, attempts to override system prompts, [data-exfiltration](/content/pan/en_us/cyberpedia/data-exfiltration) language, and requests to reveal policies or credentials. Tool-call correlation connects model actions to downstream system events. Whether an AI-generated action created a pull request, changed a cloud policy, queried a sensitive database, or modified a customer record, it should be visible through API logs, SaaS audit trails, cloud audit logs, CI/CD records, and XDR. As well, it should link back to the originating prompt and agent identity. Model behavior drift monitoring tracks refusal rates, unsafe output rates, hallucination patterns, retrieval accuracy, tool-call frequency, and jailbreak susceptibility after model or orchestration updates. A provider update that improves general capability may simultaneously weaken refusal behavior or change how the system handles ambiguous instructions. Regression signals should feed both release governance and SOC visibility. ### Responsive Controls Responsive controls contain AI incidents quickly and preserve evidence for investigation. The response plan should assume failure can originate anywhere in the execution path --- model, retrieval layer, tool chain, identity path, provider environment, or human approval process. #### Agent Shutdown Agent shutdown must be enforceable without waiting for engineering. Security teams need the ability to pause an agent, disable a tool, revoke a model route, stop a workflow, or force read-only mode. #### Credential Revocation Credential revocation must cover API keys, OAuth grants, service principals, cloud roles, SaaS tokens, plugin credentials, and agent-issued temporary credentials. Revocation should automatically trigger review of recent tool calls, data access, exports, code commits, ticket changes, and cloud modifications tied to the compromised identity. Because the agent has already acted by the time a credential is flagged (usually), this is key. #### Output Quarantine Output quarantine holds generated content when systems detect prompt injection, unsafe retrieval, data exposure, or tool misuse. Generated code, customer messages, policy documents, incident summaries, and configuration changes should pass through secure release workflows and review gates before reaching downstream systems or external recipients. #### Retrieval Rollback Retrieval rollback requires the ability to remove poisoned or overexposed documents from indexes, rebuild embeddings, invalidate cached retrieval results, restore prior corpus versions, and confirm that query-time authorization now enforces the intended boundary. Remediating a retrieval compromise without validating the authorization fix leaves the same exposure path open. #### Incident Escalation Incident escalation should route AI events through SOAR, case management, privacy workflows, legal workflows, engineering ticketing, and vendor-risk processes. Provider notification belongs in the same playbook --- model behavior anomalies, platform compromises, data retention questions, and logging access may all require vendor action or contractual evidence that the organization can't obtain after the fact. ![Responsibility in agentic AI systems](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-frontier-ai-security/responsibility-in-agentic-ai-systems.webp "Responsibility in agentic AI systems") ***Figure 1**: Responsibility in agentic AI systems* ### Governance Controls Governance controls make frontier AI security repeatable by tying ownership, approvals, testing, audit trails, and vendor obligations to each AI system's risk tier. #### Model Cards and System Cards Model cards and system cards serve as the control record for each deployment, documenting intended use, prohibited use, model and provider dependencies, evaluation results, data boundaries, and residual risk ownership. Risk assessments should examine the full AI workflow: * The data the system can reach * The actions it can take * The dependencies it introduces * The evidence available if something fails #### Approval Controls Approval controls should scale with risk. Low-risk internal assistants may need standard policy review. Customer-facing systems, regulated workflows, code-writing agents, security automation, financial actions, and production-change agents require security architecture review, as well as legal review, deeper testing, and executive risk acceptance where material exposure remains. NIST AI 600-1 provides a generative-AI-specific companion to the [AI Risk Management Framework](/content/pan/en_us/cyberpedia/nist-ai-risk-management-framework), and [MITRE ATLAS](/content/pan/en_us/cyberpedia/mitre-sensible-regulatory-framework-atlas-matrix) organizes adversarial AI techniques into testable scenarios. Both are useful anchors for building evaluation requirements that reflect real-world attack patterns rather than synthetic benchmarks. #### Audit Trails Audit trails must connect AI gateways, model observability, SIEM, SaaS audit logs, cloud logs, source-code systems, ticketing platforms, approval workflows, and GRC records into a defensible chain. A complete record shows who invoked the AI system, which model ran, which instructions applied, which sources were retrieved, which tools executed, which approvals occurred, which outputs were produced, and which downstream actions followed. #### Vendor Commitments Vendor commitments require explicit contractual language covering training use, prompt and output retention, tenant isolation, subprocessors, regional processing, model-change notifications, logging access, breach notification, incident cooperation, red-team evidence, termination support, and data exportability. Ambiguous terms become operational problems during incidents and investigations. #### Board Reporting Board reporting should show whether frontier AI use is visible, governed, and containable. Useful metrics include AI asset coverage, high-risk systems approved, sensitive data exposure events, prompt injection attempts, agent actions by consequence class, blocked tool calls, evaluation failures, unresolved vendor risks, incident readiness, and time to revoke compromised agent credentials. ## Evaluation, Red Teaming, and Assurance Frontier AI testing must run before deployment and continue after release. A model can pass every predeployment benchmark and still fail inside a live enterprise workflow because production adds users, sensitive data, retrieval systems, tools, agents, and approvals, as well as adversarial pressure that synthetic tests don't quite anticipate. ### Predeployment Evaluation Predeployment evaluation should cover the model, the surrounding application, and the connected workflow together. #### Capability Testing Capability testing establishes what the system does under approved conditions --- reasoning, tool selection, retrieval accuracy, refusal behavior --- across scenarios that reflect production data and actual user roles. #### Jailbreak and Prompt Injection Testing Jailbreak and prompt injection testing must pressure indirect inputs as much as direct ones. Documents, web pages, tickets, emails, and retrieved content are higher-risk injection surfaces than direct user prompts because they reach the model through channels users don't control and may not monitor. #### Data Leakage Testing [Data leakage](/content/pan/en_us/cyberpedia/data-leak) testing verifies the system doesn't expose secrets, regulated data, proprietary code, customer records, or content available through another user's permissions. Testing extends across prompts, uploads, retrieved sources, completions, logs, embeddings, memory, and tool outputs. #### Cyber Misuse Evaluation Cyber misuse evaluation assesses whether the system provides meaningful uplift for phishing, exploit generation, vulnerability discovery, or credential theft. MITRE ATLAS organizes adversarial AI techniques into scenarios grounded in real attack patterns rather than hypotheticals. ### Continuous Evaluation One-time approval doesn't carry assurance for systems that change continuously. Model updates, prompt revisions, retrieval changes, and connector updates can all alter behavior in ways predeployment testing never anticipated. A model update that improves general capability may simultaneously weaken refusal behavior or change how the system handles ambiguous instructions. Regression suites should rerun after every material change, covering prior jailbreaks, prompt injection payloads, leakage tests, [retrieval poisoning](/content/pan/en_us/cyberpedia/what-is-data-poisoning) tests, and known production incidents. Failed tests are the most valuable artifacts in the suite, as they define confirmed exposure and anchor future regression coverage. Production feedback loops close the gap between evaluation and reality by routing SOC findings, DLP events, red-team results, user reports, and postincident reviews back into the test suite. The strongest regression signal is a mismatch between what the workflow was designed to do and what the AI system did under pressure. ### AI Red Teaming AI [red teaming](/content/pan/en_us/cyberpedia/what-is-ai-red-teaming) should attack the full system --- prompts, retrieval indexes, memory, tools, agents, connectors, approval processes, downstream workflows, and the human trust paths connecting them. Scoping red team work to the model alone misses where most real attacks land. A mature red team attempts the spectrum of adversarial activities, from manipulating context, extracting data, and poisoning retrieval, to inducing unsafe tool calls, escaping sandboxes, and chaining actions beyond approved authority. OWASP's LLM Top 10 provides a practical framework covering prompt injection, insecure output handling, supply chain vulnerabilities, and tool and permission abuse. Human approval processes deserve dedicated testing. A model can draft a confident justification for a risky action, mislabel a destructive change as routine, or omit the evidence a reviewer needs to push back. Red teams should verify that approvers receive source lineage, tool history, risk classification, and rollback implications (vs a model-generated summary presenting the action favorably). ### Evidence and Assurance Assurance depends on reproducible evidence. Useful records capture which model ran, which system prompt applied, which sources were retrieved, which tools executed, which approvals occurred, and which downstream actions followed. Source-linked outputs let reviewers distinguish evidence from inference. A generated summary earns evidentiary weight only when it identifies the documents, logs, and telemetry behind each claim. Evaluation records should capture test objectives, model version, observed failures, mitigations applied, residual risk, and regression coverage. Residual risk requires explicit ownership. Some systems launch with accepted limitations or narrower access than originally designed, and assurance means leaders know who accepted that risk and which signal would trigger reassessment. ## Governance and Operating Model Frontier AI security needs a standing operating model with defined ownership, risk tiering, decision rights, policy enforcement, exception handling, and board reporting. ### Ownership Model * The CISO owns frontier AI security risk --- control architecture, monitoring, [incident response](/content/pan/en_us/cyberpedia/what-is-incident-response), security testing, third-party AI risk, and agentic misuse. * The CIO owns enterprise AI platform operations --- integration, service management, user enablement, and operational resilience. * The CTO owns AI engineering standards --- model integration patterns, secure SDLC alignment, and production readiness. * Legal and privacy teams define data-use boundaries, retention rules, regulatory obligations, and customer notification requirements. * Procurement and third-party risk evaluate providers, subprocessors, model-change commitments, and audit rights. * Engineering and product teams document intended use, model version, retrieval sources, tool permissions, and residual risk. * Internal audit tests whether approved policies match operational reality, particularly for systems touching customers, regulated data, or production environments. ### Risk Tiers Risk tiering gives the organization a consistent basis for deciding which systems need deeper review rather than making those decisions under deployment pressure. ISO/IEC 42001 provides a management-system approach that makes tiering repeatable across business units rather than discretionary by project. ### Decision Rights Decision rights define what AI may recommend, draft, execute automatically, or route for human approval. Without them, frontier AI embeds into workflows faster than risk owners can distinguish assistance from authority. * Low-risk AI operates in an advisory role --- recommending, summarizing, drafting, classifying. * Medium-risk AI can prepare changes, propose code, and trigger reversible actions when policy permits. * High-risk AI should route to human approval before modifying production systems, sending external communications, changing customer records, approving financial activity, or committing code. Agentic workflows need named approvers by action class --- cloud changes to the service owner, privileged identity changes to the [IAM](/content/pan/en_us/cyberpedia/what-is-identity-and-access-management) owner, customer-impacting communications to legal or support leadership. The operating model should also define who overrides a block and who accepts residual risk, as those decisions happen under pressure and ambiguity claims them if ownership isn't pre-assigned. ### Policy and Exception Management Frontier AI policy should define acceptable use, prohibited data types, approved providers, agent and tool permissions, retrieval boundaries, and evaluation requirements. This should include when enterprise AI gateways are mandatory and which data classes can't enter external models. Exception management requires discipline because deployment pressure will always push against controls. Every exception needs a named owner, expiration date, compensating controls, and monitoring requirement. Exceptions without expiration dates become shadow policy and accumulate until an incident makes them visible. High-risk exceptions --- agents with write access, external model use with sensitive data, provider terms that limit auditability --- require security, privacy, legal, and business review before approval. ## Third-Party AI Risk Frontier AI enters the enterprise through suppliers as often as it enters through internal engineering. Model providers, embedded SaaS AI features, agent builders, orchestration frameworks, and AI-enabled security tools all process enterprise data under terms and architectures the security team didn't design and may not fully understand. ### Provider Due Diligence Standard security questionnaires don't capture enough AI-specific risk. Due diligence needs evidence on the questions that matter most operationally: * Whether customer prompts, outputs, or telemetry can train or improve models * What the provider retains and for how long * Which logs customers can export * How the provider handles model updates that change safety behavior, routing logic, or tool interfaces Training use should be prohibited by default for enterprise data, with any exception requiring written approval for a specific purpose. Retention terms cut in both directions --- short retention weakens investigations while excessive retention expands breach impact. Model updates create change risk that most vendor relationships don't adequately address. Providers should commit to change notifications for material behavior shifts, version pinning where feasible, and customer-controlled rollout for high-risk workflows. ### Embedded AI Features Embedded AI creates the hardest inventory problem because it arrives inside products the enterprise already trusts, often without a new procurement event. A SaaS vendor that adds autonomous ticket routing, code assistance, or agentic workflow execution may change the product's data exposure, permission model, and regulatory profile --- without triggering the review that a new tool would. Developer platforms and security products warrant particular scrutiny. AI coding tools can access proprietary code, generate vulnerable dependencies, and interact with CI/CD systems. AI features in security products may process logs, detections, incident evidence, and vulnerability details --- sensitive material that warrants the same review applied to the security product itself. Agent builders connecting to email, source-code repositories, cloud consoles, or data warehouses deserve the highest review tier, with explicit evaluation of default permissions, credential handling, approval gates, and emergency disablement. ### Contractual Controls AI contracts need explicit language where ambiguity creates operational exposure. Spell out data ownership, training use, prompt and output retention, breach notification triggers, regulator support, model-change notification, audit rights, and exit terms. Breach notification should cover AI-specific events (i.e., unauthorized access to prompts, outputs, retrieval indexes, embeddings), in addition to traditional data breach triggers. Exit rights should ensure the organization can retrieve logs, evaluation records, configuration files, and audit history, and that termination includes deletion certificates from subprocessors. ### Concentration Risk Dependence on a small number of model providers, vector databases, and orchestration frameworks creates risk that compounds quietly. A single provider change can affect pricing, availability, safety behavior, logging access, and contractual terms across many workflows simultaneously. Resilience requires knowing the dependency map before a disruption forces the question. Do you know which business processes depend on which providers and which retrieval systems hold sensitive data? Are you aware of which workflows have no fallback? Critical AI systems should define in advance whether the organization can switch providers, revert to manual processing, and satisfy regulatory obligations during provider disruption. Vendors that control the model, the retrieval layer, and the embedded workflow surface can limit telemetry access and policy enforcement in ways that only become apparent under pressure. Think about that. It makes a strong argument for architectural choices that preserve customer control over data boundaries regardless of which provider sits behind them. ## Metrics for Frontier AI Security Frontier AI metrics should tell you whether the organization can find its AI systems, control what they access, constrain what they do, and contain failure when it happens. The most important distinction is between raw AI adoption and governed AI adoption. A rising inventory count may signal innovation, expanding exposure, or both. Coverage metrics --- the percentage of AI systems that are risk-tiered, owner-assigned, monitored, and carrying approved data boundaries --- show whether security has kept pace with the spread of models, agents, and embedded features across the enterprise. Retrieval authorization deserves its own measurement because it fails quietly. How often retrieval systems return content outside the requester's entitlement, and how many vector indexes enforce permissions at query time rather than only at index time, are more operationally meaningful than aggregate data loss prevention counts. For agentic systems, approval metrics should distinguish speed from control quality. A fast approval path is a liability if reviewers lack source lineage, tool-call history, and rollback context. The right metric is whether approvers received the evidence needed to make a defensible decision. On the response side, speed matters less than evidence preservation. An incident that closes quickly but leaves no record of which prompts ran, which tools executed, and which downstream systems changed makes recurrence more likely. After any prompt injection attempt, data leakage event, or agent misuse case, the closing validation should answer whether the same failure path remains open through another model, connector, or workflow. ## Frontier AI Security FAQs ### What is a foundation model? A foundation model is a large, general-purpose AI model trained on vast datasets and designed to be adapted across many downstream tasks. Rather than building task-specific models from scratch, organizations fine-tune or prompt these models for use cases such as text generation, code completion, or image analysis. Their flexibility makes them foundational, yet also expands the attack surface when reused across applications. ### What are model weights? In AI, model weights are the numerical values that determine the strength of connections between neurons in a neural network. Think of them as the learned knowledge or memory of the system. During training, these values are adjusted so the model can recognize patterns and make predictions. Higher weights indicate stronger influences on the final output. ### What is model weight exfiltration? Model weight exfiltration refers to the unauthorized extraction of a model's trained parameters. These weights represent the learned intelligence of the model and often embody proprietary value. If stolen, they can enable replication of the model, competitive misuse, or further attacks such as reverse engineering and vulnerability discovery. ### What is retrieval poisoning? Retrieval poisoning is an attack in which an adversary manipulates the data sources that an AI system retrieves during operation, such as vector databases or indexed documents. By inserting malicious or misleading content into these sources, attackers can influence model outputs, cause incorrect decisions, or trigger unsafe behavior without directly modifying the model itself. ### What is model extraction? Model extraction is an attack technique in which an adversary reconstructs a target model by systematically querying it and analyzing its responses. Over time, the attacker builds a functional approximation of the original model without direct access to its internal parameters. This can lead to intellectual property theft and reduced competitive advantage. ### What is membership inference? Membership inference is a privacy attack that determines whether a specific data point was part of a model's training dataset. By analyzing how confidently a model responds to certain inputs, attackers can infer the presence of sensitive or proprietary data, potentially exposing confidential information. ### What is model inversion? Model inversion is an attack that attempts to reconstruct sensitive input data from a model's outputs. For example, an attacker may infer personal information or training data characteristics by probing the model. The risk is especially high when models are trained on sensitive datasets such as medical or financial records. ### What is AI provenance? AI provenance refers to the traceability of all components involved in an AI system, including models, datasets, prompts, tools, and outputs. Strong provenance supports auditability, compliance, and trust in AI-driven systems. ### What is an AI sandbox escape? An AI sandbox escape occurs when a model or agent breaks out of its restricted execution environment and interacts with unauthorized systems or data. Sandboxes are designed to isolate AI behavior, but vulnerabilities or misconfigurations can allow attackers to bypass these controls, leading to broader system compromise. ### What is tool-call governance? Tool-call governance defines the policies and controls that regulate how an AI system interacts with external tools, APIs, and services. It ensures that each tool invocation is authorized, constrained, and auditable. Proper governance prevents misuse, limits the scope of actions, and reduces the risk of unintended or malicious operations. ### What is entitlement-aware retrieval? Entitlement-aware retrieval ensures that an AI system only retrieves data that the requesting user or agent is authorized to access at query time. It enforces access control dynamically, rather than relying solely on static indexing rules. This prevents unauthorized data exposure during retrieval-based workflows. ### What is AI runtime monitoring? AI runtime monitoring involves continuously observing an AI system's behavior during operation. It tracks prompts, outputs, tool usage, data access, and decision patterns to detect anomalies, misuse, or policy violations. Effective runtime monitoring provides the visibility needed to identify threats and respond before they escalate. Related Content [State of Cloud Security Report Get the full picture of how cloud risk is evolving. Download the State of Cloud Security Report 2025 to benchmark your strategy and act on what matters most.](https://www.paloaltonetworks.com/state-of-cloud-native-security?ts=markdown) [Frontier AI and the Future of Defense: Your Top Questions Answered What are the next steps for security leaders in this new age of frontier AI? We answer the top 10 questions customers are asking.](https://unit42.paloaltonetworks.com/frontier-ai-top-questions-answered/) [The AI Ecosystem Edge --- Introducing Our Frontier AI Alliance Frontier AI accelerates cyberattacks. Learn how Palo Alto Networks and the Frontier AI Alliance deliver an industry-standard, unified defense for enterprise AI resilience.](https://www.paloaltonetworks.com/blog/2026/04/ai-ecosystem-edge-introducing-frontier-ai-alliance/?ts=markdown) [Defender's Guide to Frontier AI: A Checklist for CISOs Advanced AI models with deep adversarial capabilities will soon become commonplace. Learn how to close your security gaps in this phased approach.](https://www.paloaltonetworks.com/resources/datasheets/defenders-guide-to-frontier-ai-checklist-for-cisos?ts=markdown) ![Share page on facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/facebook-circular-icon.svg) ![Share page on linkedin](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/linkedin-circular-icon.svg) [![Share page by an email](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/email-circular-icon.svg)](mailto:?subject=What%20Is%20Frontier%20AI%20Security%3F&body=Frontier%20AI%20security%20explained%20in%20this%20comprehensive%20guide%20covering%20how%20advanced%20models%20work%2C%20where%20they%20create%20value%2C%20and%20which%20risks%20security%20teams%20must%20control.%20at%20https%3A//www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai-security) Back to Top [Previous](https://www.paloaltonetworks.com/cyberpedia/frontier-ai-security-implementation-roadmap?ts=markdown) Frontier Security Implementation Roadmap [Next](https://www.paloaltonetworks.com/cyberpedia/what-is-frontier-ai?ts=markdown) What Is Frontier AI? {#footer} Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/ai-security?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Next-Generation Identity Security](https://www.paloaltonetworks.com/idira?ts=markdown) * [Privileged Access Management](https://www.paloaltonetworks.com/idira/human/privileged-access-management?ts=markdown) * [Identity and Access Management](https://www.paloaltonetworks.com/idira/human/identity-and-access-management?ts=markdown) * [Endpoint Privilege Manager](https://www.paloaltonetworks.com/idira/human/endpoint-privilege-manager?ts=markdown) * [Identity Governance](https://www.paloaltonetworks.com/idira/human/identity-governance?ts=markdown) * [Workforce Password Management](https://www.paloaltonetworks.com/idira/human/workforce-password-management?ts=markdown) * [Agentic Identities](https://www.paloaltonetworks.com/idira/agentic?ts=markdown) * [Secrets Management](https://www.paloaltonetworks.com/idira/machine/secrets-management?ts=markdown) * [Unified Secrets Governance](https://www.paloaltonetworks.com/idira/machine/unified-secrets-governance?ts=markdown) * [Application Credentials Delivery](https://www.paloaltonetworks.com/idira/machine/application-credentials-delivery?ts=markdown) * [Vendor Privileged Access](https://www.paloaltonetworks.com/idira/human/vendor-privileged-access?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language