[](https://www.paloaltonetworks.com/?ts=markdown) * Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get Support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) ![x close icon to close mobile navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/x-black.svg) [![Palo Alto Networks logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg)](https://www.paloaltonetworks.com/?ts=markdown) ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [](https://www.paloaltonetworks.com/?ts=markdown) * Products ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Products [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [AI Security](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise Device Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical Device Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [OT Device Security](https://www.paloaltonetworks.com/network-security/ot-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex AgentiX](https://www.paloaltonetworks.com/cortex/agentix?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Exposure Management](https://www.paloaltonetworks.com/cortex/exposure-management?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Cortex Advanced Email Security](https://www.paloaltonetworks.com/cortex/advanced-email-security?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Unit 42 Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * Solutions ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Solutions Secure AI by Design * [Secure AI Ecosystem](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [Secure GenAI Usage](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) Network Security * [Cloud Network Security](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Data Center Security](https://www.paloaltonetworks.com/network-security/data-center?ts=markdown) * [DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Intrusion Detection and Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Device Security](https://www.paloaltonetworks.com/network-security/device-security?ts=markdown) * [OT Security](https://www.paloaltonetworks.com/network-security/ot-security-solution?ts=markdown) * [5G Security](https://www.paloaltonetworks.com/network-security/5g-security?ts=markdown) * [Secure All Apps, Users and Locations](https://www.paloaltonetworks.com/sase/secure-users-data-apps-devices?ts=markdown) * [Secure Branch Transformation](https://www.paloaltonetworks.com/sase/secure-branch-transformation?ts=markdown) * [Secure Work on Any Device](https://www.paloaltonetworks.com/sase/secure-work-on-any-device?ts=markdown) * [VPN Replacement](https://www.paloaltonetworks.com/sase/vpn-replacement-for-secure-remote-access?ts=markdown) * [Web \& Phishing Security](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) Cloud Security * [Application Security Posture Management (ASPM)](https://www.paloaltonetworks.com/cortex/cloud/application-security-posture-management?ts=markdown) * [Software Supply Chain Security](https://www.paloaltonetworks.com/cortex/cloud/software-supply-chain-security?ts=markdown) * [Code Security](https://www.paloaltonetworks.com/cortex/cloud/code-security?ts=markdown) * [Cloud Security Posture Management (CSPM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-security-posture-management?ts=markdown) * [Cloud Infrastructure Entitlement Management (CIEM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-infrastructure-entitlement-management?ts=markdown) * [Data Security Posture Management (DSPM)](https://www.paloaltonetworks.com/cortex/cloud/data-security-posture-management?ts=markdown) * [AI Security Posture Management (AI-SPM)](https://www.paloaltonetworks.com/cortex/cloud/ai-security-posture-management?ts=markdown) * [Cloud Detection \& Response](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Cloud Workload Protection (CWP)](https://www.paloaltonetworks.com/cortex/cloud/cloud-workload-protection?ts=markdown) * [Web Application \& API Security (WAAS)](https://www.paloaltonetworks.com/cortex/cloud/web-app-api-security?ts=markdown) Security Operations * [Cloud Detection \& Response](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Security Information and Event Management](https://www.paloaltonetworks.com/cortex/modernize-siem?ts=markdown) * [Network Security Automation](https://www.paloaltonetworks.com/cortex/network-security-automation?ts=markdown) * [Incident Case Management](https://www.paloaltonetworks.com/cortex/incident-case-management?ts=markdown) * [SOC Automation](https://www.paloaltonetworks.com/cortex/security-operations-automation?ts=markdown) * [Threat Intel Management](https://www.paloaltonetworks.com/cortex/threat-intel-management?ts=markdown) * [Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Attack Surface Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/attack-surface-management?ts=markdown) * [Compliance Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/compliance-management?ts=markdown) * [Internet Operations Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/internet-operations-management?ts=markdown) * [Extended Data Lake (XDL)](https://www.paloaltonetworks.com/cortex/cortex-xdl?ts=markdown) * [Agentic Assistant](https://www.paloaltonetworks.com/cortex/cortex-agentic-assistant?ts=markdown) Endpoint Security * [Endpoint Protection](https://www.paloaltonetworks.com/cortex/endpoint-protection?ts=markdown) * [Extended Detection \& Response](https://www.paloaltonetworks.com/cortex/detection-and-response?ts=markdown) * [Ransomware Protection](https://www.paloaltonetworks.com/cortex/ransomware-protection?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/cortex/digital-forensics?ts=markdown) [Industries](https://www.paloaltonetworks.com/industry?ts=markdown) * [Public Sector](https://www.paloaltonetworks.com/industry/public-sector?ts=markdown) * [Financial Services](https://www.paloaltonetworks.com/industry/financial-services?ts=markdown) * [Manufacturing](https://www.paloaltonetworks.com/industry/manufacturing?ts=markdown) * [Healthcare](https://www.paloaltonetworks.com/industry/healthcare?ts=markdown) * [Small \& Medium Business Solutions](https://www.paloaltonetworks.com/industry/small-medium-business-portfolio?ts=markdown) * Services ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Services [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Assess](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [AI Security Assessment](https://www.paloaltonetworks.com/unit42/assess/ai-security-assessment?ts=markdown) * [Attack Surface Assessment](https://www.paloaltonetworks.com/unit42/assess/attack-surface-assessment?ts=markdown) * [Breach Readiness Review](https://www.paloaltonetworks.com/unit42/assess/breach-readiness-review?ts=markdown) * [BEC Readiness Assessment](https://www.paloaltonetworks.com/bec-readiness-assessment?ts=markdown) * [Cloud Security Assessment](https://www.paloaltonetworks.com/unit42/assess/cloud-security-assessment?ts=markdown) * [Compromise Assessment](https://www.paloaltonetworks.com/unit42/assess/compromise-assessment?ts=markdown) * [Cyber Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/cyber-risk-assessment?ts=markdown) * [M\&A Cyber Due Diligence](https://www.paloaltonetworks.com/unit42/assess/mergers-acquisitions-cyber-due-diligence?ts=markdown) * [Penetration Testing](https://www.paloaltonetworks.com/unit42/assess/penetration-testing?ts=markdown) * [Purple Team Exercises](https://www.paloaltonetworks.com/unit42/assess/purple-teaming?ts=markdown) * [Ransomware Readiness Assessment](https://www.paloaltonetworks.com/unit42/assess/ransomware-readiness-assessment?ts=markdown) * [SOC Assessment](https://www.paloaltonetworks.com/unit42/assess/soc-assessment?ts=markdown) * [Supply Chain Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/supply-chain-risk-assessment?ts=markdown) * [Tabletop Exercises](https://www.paloaltonetworks.com/unit42/assess/tabletop-exercise?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Respond](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Cloud Incident Response](https://www.paloaltonetworks.com/unit42/respond/cloud-incident-response?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/unit42/respond/digital-forensics?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond/incident-response?ts=markdown) * [Managed Detection and Response](https://www.paloaltonetworks.com/unit42/respond/managed-detection-response?ts=markdown) * [Managed Threat Hunting](https://www.paloaltonetworks.com/unit42/respond/managed-threat-hunting?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Transform](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [IR Plan Development and Review](https://www.paloaltonetworks.com/unit42/transform/incident-response-plan-development-review?ts=markdown) * [Security Program Design](https://www.paloaltonetworks.com/unit42/transform/security-program-design?ts=markdown) * [Virtual CISO](https://www.paloaltonetworks.com/unit42/transform/vciso?ts=markdown) * [Zero Trust Advisory](https://www.paloaltonetworks.com/unit42/transform/zero-trust-advisory?ts=markdown) [Global Customer Services](https://www.paloaltonetworks.com/services?ts=markdown) * [Education \& Training](https://www.paloaltonetworks.com/services/education?ts=markdown) * [Professional Services](https://www.paloaltonetworks.com/services/consulting?ts=markdown) * [Success Tools](https://www.paloaltonetworks.com/services/customer-success-tools?ts=markdown) * [Support Services](https://www.paloaltonetworks.com/services/solution-assurance?ts=markdown) * [Customer Success](https://www.paloaltonetworks.com/services/customer-success?ts=markdown) [![](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/logo-unit-42.svg) UNIT 42 RETAINER Custom-built to fit your organization's needs, you can choose to allocate your retainer hours to any of our offerings, including proactive cyber risk management services. Learn how you can put the world-class Unit 42 Incident Response team on speed dial. Learn more](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * Partners ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Partners NextWave Partners * [NextWave Partner Community](https://www.paloaltonetworks.com/partners?ts=markdown) * [Cloud Service Providers](https://www.paloaltonetworks.com/partners/nextwave-for-csp?ts=markdown) * [Global Systems Integrators](https://www.paloaltonetworks.com/partners/nextwave-for-gsi?ts=markdown) * [Technology Partners](https://www.paloaltonetworks.com/partners/technology-partners?ts=markdown) * [Service Providers](https://www.paloaltonetworks.com/partners/service-providers?ts=markdown) * [Solution Providers](https://www.paloaltonetworks.com/partners/nextwave-solution-providers?ts=markdown) * [Managed Security Service Providers](https://www.paloaltonetworks.com/partners/managed-security-service-providers?ts=markdown) * [XMDR Partners](https://www.paloaltonetworks.com/partners/managed-security-service-providers/xmdr?ts=markdown) Take Action * [Portal Login](https://www.paloaltonetworks.com/partners/nextwave-partner-portal?ts=markdown) * [Managed Services Program](https://www.paloaltonetworks.com/partners/managed-security-services-provider-program?ts=markdown) * [Become a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=becomepartner) * [Request Access](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=requestaccess) * [Find a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerlocator) [CYBERFORCE CYBERFORCE represents the top 1% of partner engineers trusted for their security expertise. Learn more](https://www.paloaltonetworks.com/cyberforce?ts=markdown) * Company ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Company Palo Alto Networks * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Management Team](https://www.paloaltonetworks.com/about-us/management?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com) * [Locations](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Ethics \& Compliance](https://www.paloaltonetworks.com/company/ethics-and-compliance?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Military \& Veterans](https://jobs.paloaltonetworks.com/military) [Why Palo Alto Networks?](https://www.paloaltonetworks.com/why-paloaltonetworks?ts=markdown) * [Precision AI Security](https://www.paloaltonetworks.com/precision-ai-security?ts=markdown) * [Our Platform Approach](https://www.paloaltonetworks.com/why-paloaltonetworks/platformization?ts=markdown) * [Accelerate Your Cybersecurity Transformation](https://www.paloaltonetworks.com/why-paloaltonetworks/nam-cxo-portfolio?ts=markdown) * [Awards \& Recognition](https://www.paloaltonetworks.com/about-us/awards?ts=markdown) * [Customer Stories](https://www.paloaltonetworks.com/customers?ts=markdown) * [Global Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Trust 360 Program](https://www.paloaltonetworks.com/resources/whitepapers/trust-360?ts=markdown) Careers * [Overview](https://jobs.paloaltonetworks.com/) * [Culture \& Benefits](https://jobs.paloaltonetworks.com/en/culture/) [A Newsweek Most Loved Workplace "Businesses that do right by their employees" Read more](https://www.paloaltonetworks.com/company/press/2021/palo-alto-networks-secures-top-ranking-on-newsweek-s-most-loved-workplaces-list-for-2021?ts=markdown) * More ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) More Resources * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Unit 42 Threat Research](https://unit42.paloaltonetworks.com/) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Tech Insider](https://techinsider.paloaltonetworks.com/) * [Knowledge Base](https://knowledgebase.paloaltonetworks.com/) * [Palo Alto Networks TV](https://tv.paloaltonetworks.com/) * [Perspectives of Leaders](https://www.paloaltonetworks.com/perspectives/?ts=markdown) * [Cyber Perspectives Magazine](https://www.paloaltonetworks.com/cybersecurity-perspectives/cyber-perspectives-magazine?ts=markdown) * [Regional Cloud Locations](https://www.paloaltonetworks.com/products/regional-cloud-locations?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Security Posture Assessment](https://www.paloaltonetworks.com/security-posture-assessment?ts=markdown) * [Threat Vector Podcast](https://unit42.paloaltonetworks.com/unit-42-threat-vector-podcast/) * [Packet Pushers Podcasts](https://www.paloaltonetworks.com/podcasts/packet-pusher?ts=markdown) Connect * [LIVE community](https://live.paloaltonetworks.com/) * [Events](https://events.paloaltonetworks.com/) * [Executive Briefing Center](https://www.paloaltonetworks.com/about-us/executive-briefing-program?ts=markdown) * [Demos](https://www.paloaltonetworks.com/demos?ts=markdown) * [Contact us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) [Blog Stay up-to-date on industry trends and the latest innovations from the world's largest cybersecurity Learn more](https://www.paloaltonetworks.com/blog/) * Sign In ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Language * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) * [Demos and Trials](https://www.paloaltonetworks.com/get-started?ts=markdown) Search All * [Tech Docs](https://docs.paloaltonetworks.com/search) Close search modal [Deploy Bravely --- Secure your AI transformation with Prisma AIRS](https://www.deploybravely.com) [](https://www.paloaltonetworks.com/?ts=markdown) 1. [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) 2. [Network Security](https://www.paloaltonetworks.com/cyberpedia/network-security?ts=markdown) 3. [What Is Generative AI Security? \[Explanation/Starter Guide\]](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security?ts=markdown) Table of contents * [Why is GenAI security important?](#why-is-genai-security-important) * [How does GenAI security work?](#how-does-genai-security-work) * [What are the different types of GenAI security?](#different-types-of-genai) * [What are the main GenAI security risks and threats?](#gen-ai-risks) * [How to secure GenAI in 5 steps](#gen-ai-5-steps) * [Top 12 GenAI security best practices](#gen-ai-best-practices) * [GenAI security FAQs](#gen-ai-security-faqs) # What Is Generative AI Security? \[Explanation/Starter Guide\] 11 min. read Table of contents * [Why is GenAI security important?](#why-is-genai-security-important) * [How does GenAI security work?](#how-does-genai-security-work) * [What are the different types of GenAI security?](#different-types-of-genai) * [What are the main GenAI security risks and threats?](#gen-ai-risks) * [How to secure GenAI in 5 steps](#gen-ai-5-steps) * [Top 12 GenAI security best practices](#gen-ai-best-practices) * [GenAI security FAQs](#gen-ai-security-faqs) 1. Why is GenAI security important? * [1. Why is GenAI security important?](#why-is-genai-security-important) * [2. How does GenAI security work?](#how-does-genai-security-work) * [3. What are the different types of GenAI security?](#different-types-of-genai) * [4. What are the main GenAI security risks and threats?](#gen-ai-risks) * [5. How to secure GenAI in 5 steps](#gen-ai-5-steps) * [6. Top 12 GenAI security best practices](#gen-ai-best-practices) * [7. GenAI security FAQs](#gen-ai-security-faqs) ![Image of buildings next to and AI icon, with a play button layered on top.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/what-is-generative-ai-security-video-thumbnail.png) close Generative AI security involves protecting the systems and data used by AI technologies that create new content. It ensures that the AI operates as intended and prevents harmful actions, such as unauthorized data manipulation or misuse. This includes maintaining the integrity of the AI and securing the content it generates against potential risks. ## Why is GenAI security important? Generative [AI security](https://www.paloaltonetworks.com/cyberpedia/ai-security) is important because it helps protect [AI](https://www.paloaltonetworks.com/cyberpedia/artificial-intelligence-ai) systems and their outputs from misuse, unauthorized access, and harmful manipulation. With the widespread adoption of GenAI in various industries, these technologies present new and evolving security risks. "By 2027, more than 40% of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders," according to Gartner, Inc." [- Gartner Press Release, "Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027," February 17, 2025.](https://www.gartner.com/en/newsroom/press-releases/2025-02-17-gartner-predicts-forty-percent-of-ai-data-breaches-will-arise-from-cross-border-genai-misuse-by-2027) As AI systems generate content, they also become targets for malicious actors aiming to exploit vulnerabilities in models, datasets, and applications. Which means that without strong security measures, AI systems can be manipulated to spread misinformation, cause data breaches, or even launch sophisticated cyberattacks. Plus: As AI technologies like [large language models (LLMs)](https://www.paloaltonetworks.com/cyberpedia/large-language-models-llm) become more integrated into business operations, they open new attack vectors. \*\*For example:\*\*AI models trained on vast datasets may inadvertently reveal sensitive or proprietary information. This exposure can lead to privacy violations or violations of data sovereignty regulations--especially when training data is aggregated from multiple sources across borders. Basically, GenAI security focuses on ensuring that GenAI technologies are deployed responsibly. And with controls in place to prevent security breaches, as well as protect both individuals **and** organizations. ***Note:*** *AI security-related terminology is rapidly evolving. GenAI security is a subset of AI security focused on the practice of protecting LLM models and containing the unsanctioned use of AI apps.* ## How does GenAI security work? GenAI security involves protecting the entire lifecycle of generative AI applications, from model development to deployment. ![Structured diagram titled The GenAI Security Framework on the left side in bold black text. A vertical line extends from the title and branches into five numbered steps, each enclosed in a diamond-shaped icon with an illustrative symbol. The numbers appear in sequential order from 1 to 5, formatted in a mix of blue, red, and black colors. The first, third, fourth, and fifth steps are outlined in blue, while the second step stands out with a red outline, visually differentiating it from the others. Each step is labeled in black text to the right of its corresponding icon. The first step, labeled Harden GenAI I/O integrity, features a diamond-shaped icon with a document-like symbol containing interconnected nodes, representing data integrity and structured information processing. The second step, labeled Protect GenAI data lifecycle, has a red-outlined diamond containing an eye symbol encircled by dotted and solid lines, emphasizing monitoring and oversight. The third step, labeled Secure GenAI system infrastructure, contains an icon with three interconnected circles, suggesting network security and structural resilience. The fourth step, labeled Enforce trustworthy GenAI governance, displays an icon with a document and a checkmark inside a square, indicating compliance, policies, and regulatory oversight. The fifth and final step, labeled Defend against adversarial GenAI threats, includes a globe icon overlaid with a shield, symbolizing global threat defense and cybersecurity protection. The steps are visually connected to the title through a clean and structured layout, using color and iconography to differentiate each security focus area.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_19.png) At its core, GenAI follows a shared responsibility model. So both service providers and users have distinct security roles. **Service providers** are responsible for securing the infrastructure, training data, and models. Meanwhile, **users** have to manage the security of their data inputs, access controls, and any custom applications built around the AI models. Not to mention: Organizations have to address emerging security risks that are unique to generative AI, like model poisoning, prompt injection, data leakage, etc. At a high level, to secure generative AI, organizations should focus on several primary practices: First, **governance and compliance frameworks** are crucial. They guide how data is collected, used, and secured. **For example:** Ensuring that data privacy regulations, like GDPR, are adhered to during AI model training is essential. Second, **strong access control mechanisms** protect sensitive data. This includes implementing role-based access, encryption, and monitoring systems to track and control interactions with AI models. Finally, **continuous monitoring and threat detection systems** are necessary to identify and mitigate vulnerabilities as they arise, ensuring the AI systems remain secure over time. ## What are the different types of GenAI security? GenAI security spans multiple areas, each addressing different risks associated with AI development, deployment, and usage. Protecting AI models, data, and interactions calls for specialized security strategies to mitigate threats. Securing GenAI involves protecting the entire AI ecosystem, from the inputs it processes to the outputs it generates. The main types of of GenAI security include: * Large language model (LLM) security * AI prompt security * AI TRiSM (AI trust, risk, and security management) * GenAI data security * AI API security * AI code security ***Note:*** *GenAI security and its subsets are relatively new and changing quickly, as is GenAI security terminology. The following list is nonexhaustive and intended to provide a general overview of the primary GenAI security categories.* ### Large language model (LLM) security ![Circular diagram titled 4 pillars of LLM security in bold black text at the top, with a central circular icon featuring a neural network-like symbol representing large language model (LLM) security. Four labeled sections branch outward symmetrically, each representing a distinct pillar: Infrastructure security, Data security, Model security, and Ethical considerations, with unique color-coded designs. Infrastructure security, highlighted in blue at the top right, is connected to a network icon and includes elements like firewalls, encryption, hosting environment, intrusion detection, hardware protection, and physical security, with a Cybersecurity label placed within. Data security, marked in red at the top left and linked to a database icon, lists risks such as data leakage, data poisoning, and data privacy, along with security measures like encryption, access control, and data integrity, and is labeled with LLM failure and Cybersecurity. Model security, in teal at the bottom right, connects to a shield icon and outlines protective measures such as validation, authentication, and tamper protection, with an additional Cybersecurity label included. Ethical considerations, in green at the bottom left, links to a balance scale icon and addresses concerns such as bias, discrimination, toxicity, data integrity, access control, and encryption, while also covering misinformation, hallucination, and denial-of-service attacks, with labels for LLM failure and Cybersecurity. Each section extends outward with thin lines connecting security aspects to their respective categories, with distinct colors visually separating the pillars while maintaining a structured layout around the central neural network icon.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_1.png "Circular diagram titled 4 pillars of LLM security in bold black text at the top, with a central circular icon featuring a neural network-like symbol representing large language model (LLM) security. Four labeled sections branch outward symmetrically, each representing a distinct pillar: Infrastructure security, Data security, Model security, and Ethical considerations, with unique color-coded designs. Infrastructure security, highlighted in blue at the top right, is connected to a network icon and includes elements like firewalls, encryption, hosting environment, intrusion detection, hardware protection, and physical security, with a Cybersecurity label placed within. Data security, marked in red at the top left and linked to a database icon, lists risks such as data leakage, data poisoning, and data privacy, along with security measures like encryption, access control, and data integrity, and is labeled with LLM failure and Cybersecurity. Model security, in teal at the bottom right, connects to a shield icon and outlines protective measures such as validation, authentication, and tamper protection, with an additional Cybersecurity label included. Ethical considerations, in green at the bottom left, links to a balance scale icon and addresses concerns such as bias, discrimination, toxicity, data integrity, access control, and encryption, while also covering misinformation, hallucination, and denial-of-service attacks, with labels for LLM failure and Cybersecurity. Each section extends outward with thin lines connecting security aspects to their respective categories, with distinct colors visually separating the pillars while maintaining a structured layout around the central neural network icon.") Large language model (LLM) security focuses on protecting AI systems that process and generate human-like text or other outputs based on large datasets. These models---like OpenAI's GPT---are widely used in applications like content creation, chatbots, and decision-making systems. LLM security aims to protect the models from unauthorized access, manipulation, and misuse. Effective security measures include controlling access to training data, securing model outputs, and preventing malicious input attacks that could compromise the system's integrity or cause harm. ### AI prompt security ![Infographic titled AI prompt security in bold black text at the top, with a light gray background featuring faint technical icons and a yellow border. The Palo Alto Networks logo is at the bottom. Nine security measures are displayed in white rectangular text boxes, each with a bold title, a brief description, and a unique circular icon. Input validation \& preprocessing (blue icon) ensures incoming data meets required formats. User education \& training (blue icon) equips users with security awareness. Execution isolation \& sandboxing (orange icon) limits code execution to controlled environments. Ongoing patches \& upgrades (yellow icon) emphasizes frequent system updates. Adversarial training \& augmentation (red icon) strengthens defenses by exposing systems to attack scenarios. Architectural protections \& air-gapping (blue icon) isolates critical systems from unsecured networks. Access controls \& rate limiting (purple icon) restricts resource access and request rates. Diversity, redundancy, \& segmentation (green icon) enhances security through backups and system isolation. Anomaly detection (red icon) monitors for irregular patterns, while Output monitoring \& alerting (yellow icon) continuously supervises system outputs and flags unusual activity. The structured design and distinct icons create a visually clear summary of AI prompt security measures.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_2.png) AI prompt security ensures the inputs given to generative AI models result in safe, reliable, and compliant outputs. Prompts, or user inputs, are used to instruct AI models. Improper prompts can lead to outputs that are biased, harmful, or violate privacy regulations. To secure AI prompts, organizations implement strategies like structured prompt engineering and guardrails, which guide the AI's behavior and minimize risks. These controls help ensure that AI-generated content aligns with ethical and legal standards. And that prevents the model from producing misinformation or offensive material. ### AI TRiSM (AI trust, risk, and security management) ![Circular diagram illustrating the 4 pillars of AI trust, risk, security management (TRiSM) in bold black text at the top, with a thin yellow border. At the center, a brain-shaped AI icon is labeled AI TRiSM and is surrounded by a segmented ring divided into four equal sections, each representing a pillar. The Privacy pillar, located at the top left, is marked with a blue icon of a padlock and a user profile. The ModelOps pillar, positioned at the top right, has a blue icon depicting a workflow diagram. The Explainability/model monitoring pillar, located at the bottom right, is represented by a blue icon featuring a magnifying glass over a data chart. The AI application security pillar, at the bottom left, is marked by a green icon of a shield and interconnected nodes. The segmented ring is colored in alternating shades of blue and green, visually separating each pillar while maintaining a continuous circular flow.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_3.png "Circular diagram illustrating the 4 pillars of AI trust, risk, security management (TRiSM) in bold black text at the top, with a thin yellow border. At the center, a brain-shaped AI icon is labeled AI TRiSM and is surrounded by a segmented ring divided into four equal sections, each representing a pillar. The Privacy pillar, located at the top left, is marked with a blue icon of a padlock and a user profile. The ModelOps pillar, positioned at the top right, has a blue icon depicting a workflow diagram. The Explainability/model monitoring pillar, located at the bottom right, is represented by a blue icon featuring a magnifying glass over a data chart. The AI application security pillar, at the bottom left, is marked by a green icon of a shield and interconnected nodes. The segmented ring is colored in alternating shades of blue and green, visually separating each pillar while maintaining a continuous circular flow.") AI TRiSM (trust, risk, and security management) is a comprehensive framework for managing the risks and ethical concerns associated with AI systems. It focuses on maintaining trust in AI systems by addressing challenges like algorithmic bias, data privacy, and explainability. The framework helps organizations manage risks by implementing principles like transparency, model monitoring, and privacy protection. Basically, AI TRiSM ensures that AI applications operate securely, ethically, and in compliance with regulations. Which promotes confidence in their use across industries. ### GenAI data security ![Structured diagram of GenAI data security measures with a hierarchical layout divided into multiple categories. At the top, the Front end section is marked in orange and includes authentication, access control, data validation, and response sanitization as key security measures. Below, the Back end section is highlighted in purple and consists of crypt controls, secrets management, secure API, and logging and monitoring to enhance security at the system level. Underneath, four categories—LLM framework, data, model, and agents—are displayed with blue labels, each containing security considerations. The LLM framework section addresses third-party component validation and data privacy and protection, while the data section emphasizes model training data retention security and data leakage \& content control. The model section highlights adversarial attack protection and single-tenant architecture, whereas the agents section focuses on reputation \& integrity checks and permission verification. At the bottom, a GenAI/LLM hosted infrastructure section in gray presents additional considerations, including business continuity, monitoring and incident response, patch management, and incident response, ensuring comprehensive security for AI systems.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_4.png "Structured diagram of GenAI data security measures with a hierarchical layout divided into multiple categories. At the top, the Front end section is marked in orange and includes authentication, access control, data validation, and response sanitization as key security measures. Below, the Back end section is highlighted in purple and consists of crypt controls, secrets management, secure API, and logging and monitoring to enhance security at the system level. Underneath, four categories—LLM framework, data, model, and agents—are displayed with blue labels, each containing security considerations. The LLM framework section addresses third-party component validation and data privacy and protection, while the data section emphasizes model training data retention security and data leakage & content control. The model section highlights adversarial attack protection and single-tenant architecture, whereas the agents section focuses on reputation & integrity checks and permission verification. At the bottom, a GenAI/LLM hosted infrastructure section in gray presents additional considerations, including business continuity, monitoring and incident response, patch management, and incident response, ensuring comprehensive security for AI systems.") GenAI data security involves protecting sensitive data generative AI systems use to train models or generate outputs. Since AI models process large amounts of data (including personal and proprietary information), securing it is vital to prevent breaches or misuse. Key practices in GenAI data security include: * Implementing strong access controls * Anonymizing data to protect privacy * Regularly auditing models to detect biases or vulnerabilities In essence, GenAI data security protects sensitive info and aims to support compliance with regulations like GDPR. ### AI API security ![A structured graphical overview of Components of AI API Security with eight distinct sections, each enclosed in a rectangular box with a blue title and a brief description underneath. The Authentication and Authorization section highlights mechanisms like OAuth and multi-factor authentication (MFA) to ensure that only authorized users and systems access AI APIs. The Encryption section emphasizes securing data in transit and at rest to protect sensitive information during transmission and storage. The Input Validation section focuses on preventing malicious data from manipulating AI models by validating inputs and protecting against injection attacks. The Rate Limiting and Throttling section outlines restricting API requests within specific timeframes to prevent denial-of-service (DoS) attacks and ensure system stability. The API Monitoring and Logging section describes monitoring API activity and logging access to detect suspicious behavior or potential security breaches. The Threat Detection and Response section highlights the deployment of intrusion detection systems (IDS) and other tools to identify and mitigate API attacks. The Access Control and Segmentation section explains restricting access to sensitive API areas and segmenting data to minimize exposure to risks. The Security Patching and Updates section underscores the importance of regularly updating the API and components to address vulnerabilities and reduce security risks. The image uses a structured layout with blue accents, small icons above each section title, and an evenly spaced grid arrangement to visually categorize key AI API security components.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_5.png "A structured graphical overview of Components of AI API Security with eight distinct sections, each enclosed in a rectangular box with a blue title and a brief description underneath. The Authentication and Authorization section highlights mechanisms like OAuth and multi-factor authentication (MFA) to ensure that only authorized users and systems access AI APIs. The Encryption section emphasizes securing data in transit and at rest to protect sensitive information during transmission and storage. The Input Validation section focuses on preventing malicious data from manipulating AI models by validating inputs and protecting against injection attacks. The Rate Limiting and Throttling section outlines restricting API requests within specific timeframes to prevent denial-of-service (DoS) attacks and ensure system stability. The API Monitoring and Logging section describes monitoring API activity and logging access to detect suspicious behavior or potential security breaches. The Threat Detection and Response section highlights the deployment of intrusion detection systems (IDS) and other tools to identify and mitigate API attacks. The Access Control and Segmentation section explains restricting access to sensitive API areas and segmenting data to minimize exposure to risks. The Security Patching and Updates section underscores the importance of regularly updating the API and components to address vulnerabilities and reduce security risks. The image uses a structured layout with blue accents, small icons above each section title, and an evenly spaced grid arrangement to visually categorize key AI API security components.") AI [API security](https://www.paloaltonetworks.com/cyberpedia/what-is-api-security) focuses on securing the application programming interfaces (APIs) that allow different systems to interact with AI models. APIs are very often the entry points for users and other applications to access generative AI services. Which makes them serious targets for attacks like [denial-of-service (DoS)](https://www.paloaltonetworks.com/cyberpedia/what-is-a-denial-of-service-attack-dos) or man-in-the-middle (MITM) attacks. AI-driven security measures help with protecting APIs from unauthorized access and manipulation, and include: * Predictive analytics * Threat detection * Biometric authentication Effectively, when organizations secure AI APIs, they're protecting the integrity and confidentiality of data transmitted between systems. ### AI code security ![Architecture diagram with labeled elements, titled AI code security. On the left, Proprietary data is represented by a stacked database icon, visually connecting to Supervised learning, which is depicted with a neural network icon. These elements feed into a central gray box labeled AI generated code, indicating the point where machine learning models generate code based on trained data. From this stage, a directional arrow leads to Static analysis of code, represented by a green circular icon. Below this step, a dashed line connects to Ongoing secure code training, emphasizing continuous improvement in security practices. The final stage, labeled Production, is represented by a set of interconnected gears, signifying deployment. The flowchart uses clean lines and minimalistic icons to depict the structured process of AI-generated code moving through validation before being deployed into production.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_8.png "Architecture diagram with labeled elements, titled AI code security. On the left, Proprietary data is represented by a stacked database icon, visually connecting to Supervised learning, which is depicted with a neural network icon. These elements feed into a central gray box labeled AI generated code, indicating the point where machine learning models generate code based on trained data. From this stage, a directional arrow leads to Static analysis of code, represented by a green circular icon. Below this step, a dashed line connects to Ongoing secure code training, emphasizing continuous improvement in security practices. The final stage, labeled Production, is represented by a set of interconnected gears, signifying deployment. The flowchart uses clean lines and minimalistic icons to depict the structured process of AI-generated code moving through validation before being deployed into production.") AI code security is about making sure that code generated by AI models is safe and free from vulnerabilities. AI systems have limitations when it comes to understanding complex security contexts, which means they can produce code that inadvertently contains security flaws like SQL injection or cross-site scripting (XSS). To mitigate risks, organizations need to thoroughly review and test AI-generated code using [static code analysis](https://www.paloaltonetworks.com/cyberpedia/what-is-sast-static-application-security-testing) tools. Not to mention ensure that developers are trained in secure coding practices. Taking a proactive approach helps prevent vulnerabilities from reaching production systems. Which in the end, ensures the reliability and safety of AI-driven applications. ## What are the main GenAI security risks and threats? GenAI security risks stem from vulnerabilities in data, models, infrastructure, and user interactions. Threat actors can manipulate AI systems, exploit weaknesses in training data, or compromise APIs to gain unauthorized access. At its core: Securing GenAI requires addressing multiple attack surfaces that impact both the integrity of AI-generated content and the safety of the underlying systems. The primary security risks and threats associated with GenAI include: * Prompt injection attacks * AI system and infrastructure security * Insecure AI generated code * Data poisoning * AI supply chain vulnerabilities * AI-generated content integrity risks * Shadow AI * Sensitive data disclosure or leakage ***Note:*** *GenAI security risks and threats are rapidly evolving and subject to change.* | ***Further reading:** [Top GenAI Security Challenges: Risks, Issues, \& Solutions](https://www.paloaltonetworks.com/cyberpedia/generative-ai-security-risks)* ### Prompt injection attacks ![Architecture diagram illustrating a prompt injection attack through a two-step process. The first step, labeled STEP 1: The adversary plants indirect prompts, shows an attacker icon connected to a malicious prompt message, Your new task is: \[y\], which is then directed to a publicly accessible server. The second step, labeled STEP 2: LLM retrieves the prompt from a web resource, depicts a user requesting task \[x\] from an application-integrated LLM. Instead of performing the intended request, the LLM interacts with a poisoned web resource, which injects a manipulated instruction, Your new task is: \[y\]. This altered task is then executed, leading to unintended actions. The diagram uses red highlights to emphasize malicious interactions and structured arrows to indicate the flow of information between different entities involved in the attack.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_6.png "Architecture diagram illustrating a prompt injection attack through a two-step process. The first step, labeled STEP 1: The adversary plants indirect prompts, shows an attacker icon connected to a malicious prompt message, Your new task is: [y], which is then directed to a publicly accessible server. The second step, labeled STEP 2: LLM retrieves the prompt from a web resource, depicts a user requesting task [x] from an application-integrated LLM. Instead of performing the intended request, the LLM interacts with a poisoned web resource, which injects a manipulated instruction, Your new task is: [y]. This altered task is then executed, leading to unintended actions. The diagram uses red highlights to emphasize malicious interactions and structured arrows to indicate the flow of information between different entities involved in the attack.") Prompt injection attacks manipulate the inputs given to AI systems, causing them to produce unintended or harmful outputs. These attacks exploit the AI's natural language processing capabilities by inserting malicious instructions into prompts. \*\*For example:\*\*Attackers can trick an AI model into revealing sensitive information or bypassing security controls. Because AI systems often rely on user inputs to generate responses, detecting malicious prompts remains a significant security challenge. | ***Further reading:** [What Is a Prompt Injection Attack? \[Examples \& Prevention\]](https://www.paloaltonetworks.com/cyberpedia/what-is-a-prompt-injection-attack)* ### AI system and infrastructure security ![Architecture diagram illustrating an example API vulnerability through a linear flow of compromised interactions. On the left, an attacker icon in a dark red box is connected by an arrow to a malicious code symbol, which is labeled in red italics. The arrow continues toward a central API icon, which is represented by a gear symbol inside a white-bordered box with a small red warning symbol at the top right corner. From the API, a thin arrow extends to the right, connecting to a LLM/AI icon, depicted as a neural network structure inside a white box. The directional flow visually represents how an attacker injects malicious code into an API, which then propagates through the system, ultimately affecting the AI model.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/example-api-vulnerability-genai.png "Architecture diagram illustrating an example API vulnerability through a linear flow of compromised interactions. On the left, an attacker icon in a dark red box is connected by an arrow to a malicious code symbol, which is labeled in red italics. The arrow continues toward a central API icon, which is represented by a gear symbol inside a white-bordered box with a small red warning symbol at the top right corner. From the API, a thin arrow extends to the right, connecting to a LLM/AI icon, depicted as a neural network structure inside a white box. The directional flow visually represents how an attacker injects malicious code into an API, which then propagates through the system, ultimately affecting the AI model.") Poorly secured AI infrastructure---including APIs, insecure plug-ins, and hosting environments---can expose systems to unauthorized access, model tampering, or denial-of-service attacks. **For example:** API vulnerabilities in GenAI systems can expose critical functions to attackers, allowing unauthorized access or manipulation of AI-generated outputs. Common vulnerabilities include broken authentication, improper input validation, and insufficient authorization. These weaknesses can lead to data breaches, unauthorized model manipulation, or denial-of-service attacks. So securing AI APIs requires robust authentication protocols, proper input validation, and monitoring for unusual activity. ### Insecure AI generated code ![Architecture diagram depicting an insecure AI-generated code scenario through a structured flowchart. On the left, three AI icons, each represented by a neural network symbol inside white boxes, are connected by arrows to a central pushing code icon, which is a gray circle containing a code symbol. From this point, an arrow extends rightward into a Git repository, represented by a white rectangular box with two blue buttons labeled Develop and Release. The Develop button is linked to a Testing icon, depicted as a circular symbol with a checklist, which is further connected to a Changes icon, represented by a gear. The Release button is connected downward to a Production label, which includes an icon of interconnected circles and the text Vulnerable code now in production, indicating that insecure AI-generated code has moved into the live environment. The directional flow visually represents how AI-generated code enters a repository, undergoes development and release stages, and ultimately reaches production with vulnerabilities intact.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_15.png "Architecture diagram depicting an insecure AI-generated code scenario through a structured flowchart. On the left, three AI icons, each represented by a neural network symbol inside white boxes, are connected by arrows to a central pushing code icon, which is a gray circle containing a code symbol. From this point, an arrow extends rightward into a Git repository, represented by a white rectangular box with two blue buttons labeled Develop and Release. The Develop button is linked to a Testing icon, depicted as a circular symbol with a checklist, which is further connected to a Changes icon, represented by a gear. The Release button is connected downward to a Production label, which includes an icon of interconnected circles and the text Vulnerable code now in production, indicating that insecure AI-generated code has moved into the live environment. The directional flow visually represents how AI-generated code enters a repository, undergoes development and release stages, and ultimately reaches production with vulnerabilities intact.") Insecure AI-generated code refers to software produced by AI models that contain security flaws, such as improper validation or outdated dependencies. Since AI models are trained on existing code, they can inadvertently replicate vulnerabilities found in the training data. These flaws can lead to system failures, unauthorized access, or other cyberattacks. Thorough code review and testing are essential to mitigate the risks posed by AI-generated code. ### Data poisoning Data poisoning involves maliciously altering the training data used to build AI models, causing them to behave unpredictably or maliciously. By injecting misleading or biased data into the dataset, attackers can influence the model's outputs to favor certain actions or outcomes. ![Architecture diagram illustrating a data poisoning attack by depicting the flow of compromised training data into a machine learning system. On the left, a red icon labeled Poisoning samples with a silhouette of an attacker connects downward to a Training data icon, represented by a database symbol. An arrow extends rightward to a Bad data icon, signifying the introduction of manipulated or corrupted data into the training set. The next stage, labeled Deployed, transitions to an ML-based service, represented by a circular neural network icon. Above this stage, an Input label indicates the data fed into the model after deployment. On the right, three red arrows point outward from the ML-based service, each leading to separate labels: Accuracy drop, Misclassifications, and Backdoor triggering, illustrating the potential consequences of the poisoned data during the Testing (or inference) phase. A thin horizontal line at the bottom divides the Training phase from the Testing (or inference) phase, visually differentiating the stages of the attack.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_9.png "Architecture diagram illustrating a data poisoning attack by depicting the flow of compromised training data into a machine learning system. On the left, a red icon labeled Poisoning samples with a silhouette of an attacker connects downward to a Training data icon, represented by a database symbol. An arrow extends rightward to a Bad data icon, signifying the introduction of manipulated or corrupted data into the training set. The next stage, labeled Deployed, transitions to an ML-based service, represented by a circular neural network icon. Above this stage, an Input label indicates the data fed into the model after deployment. On the right, three red arrows point outward from the ML-based service, each leading to separate labels: Accuracy drop, Misclassifications, and Backdoor triggering, illustrating the potential consequences of the poisoned data during the Testing (or inference) phase. A thin horizontal line at the bottom divides the Training phase from the Testing (or inference) phase, visually differentiating the stages of the attack.") This can result in erroneous predictions, vulnerabilities, or biased decision-making. Preventing data poisoning requires secure data collection practices and monitoring for unusual patterns in training datasets. ### AI supply chain vulnerabilities ![Architecture diagram depicting model theft through an example of a model extraction approach, illustrating the unauthorized replication of a machine learning model. On the left, a Data owner icon, represented by a laptop, is linked to a rightward arrow labeled Train model, directing toward a large blue ML service box in the center. Inside this blue box, a database icon and circular gears represent the model’s internal workings. On the right side of the ML service, a sequence of inputs and outputs is shown with X₁, Xg representing queries and f(X₁), f(Xg) representing the model's corresponding responses. These values are sent to an Extraction adversary, depicted in a red-outlined box containing a silhouette of an attacker above a laptop. The final element, labeled f, represents the adversary’s attempt to reconstruct the model using the stolen outputs.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_10.png "Architecture diagram depicting model theft through an example of a model extraction approach, illustrating the unauthorized replication of a machine learning model. On the left, a Data owner icon, represented by a laptop, is linked to a rightward arrow labeled Train model, directing toward a large blue ML service box in the center. Inside this blue box, a database icon and circular gears represent the model’s internal workings. On the right side of the ML service, a sequence of inputs and outputs is shown with X₁, Xg representing queries and f(X₁), f(Xg) representing the model's corresponding responses. These values are sent to an Extraction adversary, depicted in a red-outlined box containing a silhouette of an attacker above a laptop. The final element, labeled f, represents the adversary’s attempt to reconstruct the model using the stolen outputs.") Many organizations rely on third-party models, open-source datasets, and pre-trained AI services. Which introduces risks like model backdoors, poisoned datasets, and compromised training pipelines. **For example:** Model theft, or model extraction, occurs when attackers steal the architecture or parameters of a trained AI model. This can be done by querying the model and analyzing its responses to infer its inner workings. Put simply, stolen models allow attackers to bypass the effort and cost required to train high-quality AI systems. Protecting against model theft involves: * Implementing access controls * Limiting the ability to query models * Securing model deployment environments ### AI-generated content integrity risks (biases, misinformation, and hallucinations) GenAI models can amplify bias, generate misleading information, or hallucinate entirely false outputs. ![A circular framework divided into four quadrants, each representing a category of AI bias and inequality. The center of the diagram contains a circular structure labeled WORLD, DATA, DESIGN, USE, with arrows indicating their interconnection. The top left quadrant, shaded in blue, is titled Real world patterns of health inequality and discrimination and contains three subcategories: Discriminatory healthcare processes, Unequal access and resource allocation, and Biased clinical decision making, each represented by icons depicting healthcare, financial imbalance, and decision-making. The top right quadrant, shaded in red, is labeled Discriminatory data, featuring two key issues: Patterns of bias and discrimination baked into data distributions and Sampling biases and lack of representative datasets, with icons depicting data analysis and dataset sampling. The bottom right quadrant, shaded in blue, is titled Biased AI design and deployment practices, listing Biased and exclusionary design, model building and testing practices, Power imbalances in agenda setting and problem formulation, and Biased deployment, explanation and system monitoring practices, accompanied by icons representing system development and decision-making. The bottom left quadrant, shaded in green, is labeled Application injustices, containing Exacerbating global health inequality and rich-poor treatment gaps, Disregarding and deepening digital divides, and Hazardous and discriminatory repurposing of biased AI systems, with icons symbolizing digital access, societal disparity, and biased AI usage. The circular structure visually connects these issues, illustrating their impact across AI systems.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025-11.png "A circular framework divided into four quadrants, each representing a category of AI bias and inequality. The center of the diagram contains a circular structure labeled WORLD, DATA, DESIGN, USE, with arrows indicating their interconnection. The top left quadrant, shaded in blue, is titled Real world patterns of health inequality and discrimination and contains three subcategories: Discriminatory healthcare processes, Unequal access and resource allocation, and Biased clinical decision making, each represented by icons depicting healthcare, financial imbalance, and decision-making. The top right quadrant, shaded in red, is labeled Discriminatory data, featuring two key issues: Patterns of bias and discrimination baked into data distributions and Sampling biases and lack of representative datasets, with icons depicting data analysis and dataset sampling. The bottom right quadrant, shaded in blue, is titled Biased AI design and deployment practices, listing Biased and exclusionary design, model building and testing practices, Power imbalances in agenda setting and problem formulation, and Biased deployment, explanation and system monitoring practices, accompanied by icons representing system development and decision-making. The bottom left quadrant, shaded in green, is labeled Application injustices, containing Exacerbating global health inequality and rich-poor treatment gaps, Disregarding and deepening digital divides, and Hazardous and discriminatory repurposing of biased AI systems, with icons symbolizing digital access, societal disparity, and biased AI usage. The circular structure visually connects these issues, illustrating their impact across AI systems.") Source: [https://www.bmj.com/content/372/bmj.n304](https://www.bmj.com/content/372/bmj.n304) These risks undermine trust, create compliance issues, and can be exploited by attackers for manipulation. ![Graphic with a white background which has a structured layout divided into two main sections. On the left, three vertically aligned blue icons are enclosed in circular outlines, each representing different aspects of AI hallucinations. The top icon features a database symbol, the middle icon shows a document with a question mark, and the bottom icon displays a robot head with a red X underneath it. Next to the icons, a vertical line with small punctuation symbols, including an exclamation mark and a question mark, visually connects them. The right side contains bold black text at the top that states, What is an AI hallucination? Below the title, a paragraph in black text explains that an AI hallucination occurs when artificial intelligence generates incorrect, misleading, or unfounded information and describes how AI can produce confident but unjustified responses. A separate white box with a light gray outline and blue Example: text highlights a cybersecurity-related instance, explaining that a model trained on incorrect threat data may falsely identify non-existent threats.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_20-1-1.png "Graphic with a white background which has a structured layout divided into two main sections. On the left, three vertically aligned blue icons are enclosed in circular outlines, each representing different aspects of AI hallucinations. The top icon features a database symbol, the middle icon shows a document with a question mark, and the bottom icon displays a robot head with a red X underneath it. Next to the icons, a vertical line with small punctuation symbols, including an exclamation mark and a question mark, visually connects them. The right side contains bold black text at the top that states, What is an AI hallucination? Below the title, a paragraph in black text explains that an AI hallucination occurs when artificial intelligence generates incorrect, misleading, or unfounded information and describes how AI can produce confident but unjustified responses. A separate white box with a light gray outline and blue Example: text highlights a cybersecurity-related instance, explaining that a model trained on incorrect threat data may falsely identify non-existent threats.") **For example:** AI systems can develop biases based on the data they are trained on, and attackers may exploit these biases to manipulate the system. For instance, biased models may fail to recognize certain behaviors or demographic traits, allowing attackers to exploit these gaps. Addressing AI biases involves regular audits, using diverse datasets, and implementing fairness algorithms to ensure that AI models make unbiased decisions. ### Shadow AI Shadow AI refers to the unauthorized use of AI tools by employees or individuals within an organization without the oversight of IT or security teams. ![The image displays a structured layout with three interconnected circular icons, each representing a different risk associated with Shadow AI. At the top center, the title Shadow AI is written in bold black text with a thin horizontal line beneath it. Below the title, three numbered risks are arranged in a triangular formation, with the first on the left, the second in the middle, and the third on the right. Each risk is accompanied by a blue circular icon containing a white pictogram. The first risk, labeled 1. Generating misinformation (and acting on it), is positioned on the left and features an icon of a speech bubble with a question mark inside, enclosed within a semicircular blue arc. The second risk, labeled 2. Exposing proprietary company information to LLM manipulation, is centrally located and highlighted with a slightly larger icon that depicts a stack of database disks with exclamation marks, indicating sensitive information exposure. The third risk, labeled 3. Opening up customer data to unknown risks, is on the right and features an icon of an eye with a triangular warning symbol, also enclosed within a blue semicircular arc. Thin blue lines connect each icon to its respective title, visually linking the risks under the overarching theme of Shadow AI.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_12.png "The image displays a structured layout with three interconnected circular icons, each representing a different risk associated with Shadow AI. At the top center, the title Shadow AI is written in bold black text with a thin horizontal line beneath it. Below the title, three numbered risks are arranged in a triangular formation, with the first on the left, the second in the middle, and the third on the right. Each risk is accompanied by a blue circular icon containing a white pictogram. The first risk, labeled 1. Generating misinformation (and acting on it), is positioned on the left and features an icon of a speech bubble with a question mark inside, enclosed within a semicircular blue arc. The second risk, labeled 2. Exposing proprietary company information to LLM manipulation, is centrally located and highlighted with a slightly larger icon that depicts a stack of database disks with exclamation marks, indicating sensitive information exposure. The third risk, labeled 3. Opening up customer data to unknown risks, is on the right and features an icon of an eye with a triangular warning symbol, also enclosed within a blue semicircular arc. Thin blue lines connect each icon to its respective title, visually linking the risks under the overarching theme of Shadow AI.") These unsanctioned tools, although often used to improve productivity, can absolutely expose sensitive data or create compliance issues. To manage shadow AI risks, organizations have to have clear policies for AI tool usage and strong oversight to be sure that all AI applications comply with security protocols. ### Sensitive data disclosure or leakage ![Graphic representing six causes of GenAI data leakage, structured along an interconnected, continuous orange pathway with circular nodes highlighting each cause. At the top center, the title GenAI data leakage causes is displayed in bold black text with a thin gray underline. The pathway begins on the left with the first node labeled 1. Unnecessary inclusion of sensitive information in training data, featuring an icon of a document with a lock, representing data security risks in training. The second node, labeled 2. Overfitting, contains an icon of a fluctuating data graph, indicating a model’s tendency to memorize training data too closely. The third node, labeled 3. Use of 3rd party AI services, includes an icon of interconnected nodes, illustrating potential vulnerabilities when integrating external AI services. The fourth node, labeled 4. Prompt injection attack, has an icon depicting a manipulated prompt, indicating how malicious inputs can exploit AI models. The fifth node, labeled 5. Data interception over the network, features an icon of a network connection with a security breach, representing risks of unauthorized data access during transmission. The final node, labeled 6. Leakage of stored model output, includes an icon of a database stack, indicating the risk of sensitive model outputs being unintentionally exposed. The orange pathway visually connects all six nodes in a continuous flow, emphasizing the interconnected nature of these risks.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_13.png "Graphic representing six causes of GenAI data leakage, structured along an interconnected, continuous orange pathway with circular nodes highlighting each cause. At the top center, the title GenAI data leakage causes is displayed in bold black text with a thin gray underline. The pathway begins on the left with the first node labeled 1. Unnecessary inclusion of sensitive information in training data, featuring an icon of a document with a lock, representing data security risks in training. The second node, labeled 2. Overfitting, contains an icon of a fluctuating data graph, indicating a model’s tendency to memorize training data too closely. The third node, labeled 3. Use of 3rd party AI services, includes an icon of interconnected nodes, illustrating potential vulnerabilities when integrating external AI services. The fourth node, labeled 4. Prompt injection attack, has an icon depicting a manipulated prompt, indicating how malicious inputs can exploit AI models. The fifth node, labeled 5. Data interception over the network, features an icon of a network connection with a security breach, representing risks of unauthorized data access during transmission. The final node, labeled 6. Leakage of stored model output, includes an icon of a database stack, indicating the risk of sensitive model outputs being unintentionally exposed. The orange pathway visually connects all six nodes in a continuous flow, emphasizing the interconnected nature of these risks.") Sensitive data disclosure or leakage happens when AI models inadvertently reveal confidential or personal information. This can occur through overfitting, where the model outputs data too closely tied to its training set, or through vulnerabilities like prompt injection. Preventing GenAI data leakage involves: * Anonymizing sensitive information * Enforcing access controls * Regularly testing models ## How to secure GenAI in 5 steps ![Graphic with a structured visual representation of five key steps to securing generative AI under the title How to secure GenAI. Each step is numbered and accompanied by an icon enclosed in a diamond shape, with a mix of blue, orange, and black colors. The first step, Harden GenAI I/O integrity, is marked with a blue icon featuring interconnected elements and includes recommendations to validate and sanitize input data, minimize sensitive or malicious output, and enforce input and output validation. The second step, Protect GenAI data lifecycle, has an orange icon with a circular element in the center and emphasizes safeguarding training data integrity, encrypting data, enforcing access controls, and ensuring training on reliable datasets. The third step, Secure GenAI system infrastructure, is denoted with a blue icon depicting connected nodes and focuses on preventing unauthorized access, securing against malicious plug-ins, and mitigating denial-of-service attacks. The fourth step, Enforce trustworthy GenAI governance, is represented by a blue icon resembling a document with a checkmark and outlines the importance of model verification, explainability, bias detection, and alignment with ethical standards. The fifth step, Defend against adversarial GenAI threats, has a black icon with a globe and network lines and highlights proactive threat intelligence, anomaly detection, and incident response planning. The content is structured in a left-aligned vertical format, with numbered steps in bold, supporting text in bullet points, and a color-coded design that distinguishes each section.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_14.png) Understanding the full scope of GenAI security requires a well-rounded framework that offers clarity on the various challenges, potential attack vectors, and stages involved in GenAI security. Your organization can better identify **and** tackle unique GenAI security issues using this five-step process: 1. Harden GenAI I/O integrity 2. Protect GenAI data lifecycle 3. Secure GenAI system infrastructure 4. Enforce trustworthy GenAI governance 5. Defend against adversarial GenAI threats This framework provides a complete understanding of GenAI security issues by addressing its interdependencies. Using a comprehensive approach is really important for taking advantage of the full potential of GenAI technologies while effectively managing the security risks they bring. ### Step 1: Harden GenAI I/O integrity Generative AI is only as secure as the inputs it processes and the outputs it generates. That's why it's important to validate and sanitize input data to block jailbreak attempts and prompt injection attacks. At the same time, output filtering helps prevent malicious or sensitive content from slipping through. ***Tip:*** *Don't forget that even well-structured input can contain hidden threats, like encoded malicious commands or fragmented payloads that bypass simple validation. To combat this, use a multi-layered approach to input validation. **For example:** Combine rule-based filters with AI-driven anomaly detection to catch complex obfuscation techniques.* ### Step 2: Protect GenAI data lifecycle AI models rely on vast amounts of data, which makes securing that data a top priority. Protecting training data from poisoning and leakage keeps models reliable and trustworthy. Encryption, access controls, and secure handling practices help ensure sensitive information stays protected---and that models generate accurate and responsible outputs. ### Step 3: Secure GenAI system infrastructure The infrastructure hosting GenAI models needs strong protections against unauthorized access and malicious activity. That means securing against vulnerabilities like insecure plug-ins and preventing denial-of-service attacks that could disrupt operations. A resilient system infrastructure ensures models remain available, reliable, and secure. ***Tip:*** *A common oversight in AI security is the reliance on default security settings in third-party plugins and libraries, which can introduce vulnerabilities. Be sure to apply the [principle of least privilege](https://www.paloaltonetworks.com/cyberpedia/what-is-the-principle-of-least-privilege) to all AI-related infrastructure components. Restrict access to only what's necessary, and segment AI workloads to limit potential attack impact.* ### Step 4: Enforce trustworthy GenAI governance AI models should behave predictably and align with ethical and business objectives. That starts with using verification, explainability, and bias detection techniques to prevent unintended outcomes. A strong governance approach ensures that AI remains fair, accountable, and in line with organizational standards. ***Note:*** *Explainability isn't just an ethical concern---it's a security one. If a model's decision-making process isn't transparent, it's harder to spot adversarial manipulation.* ### Step 5: Defend against adversarial GenAI threats Attackers are finding new ways to exploit AI, so staying ahead of emerging threats is key. Proactive [threat intelligence](https://www.paloaltonetworks.com/cyberpedia/what-is-a-threat-intelligence-platform), anomaly detection, and [incident response planning](https://www.paloaltonetworks.com/cyberpedia/incident-response-plan) help organizations detect and mitigate risks before they escalate. A strong defense keeps AI models secure and resilient against evolving cyber threats. | ***Further reading:** [What Is AI Governance?](https://www.paloaltonetworks.com/cyberpedia/ai-governance)* ## Top 12 GenAI security best practices Securing generative AI requires a proactive approach to identifying and mitigating risks. Organizations absolutely must implement strong security measures that protect AI models, data, and infrastructure from evolving threats. The following best practices will help ensure AI systems remain secure, resilient, and compliant with regulatory standards. ![Infographic presenting a structured list titled Top 12 GenAI security best practices in bold orange text at the top with twelve best practices displayed in a vertical sequence. Each number is in red and positioned to the left of the corresponding security practice, aligned with circular icons containing minimalistic black-and-white illustrations related to AI security. The list begins with Conduct risk assessments for new AI vendors followed by Mitigate security threats in AI agents, which addresses the need for securing autonomous AI functions. The third item, Eliminate shadow AI, emphasizes governance and oversight, while the fourth, Implement explainable AI, focuses on transparency in AI decision-making. The fifth best practice, Deploy continuous monitoring and vulnerability management, is positioned centrally in the list and is followed by Execute regular AI audits, which highlights periodic security assessments. The seventh item, Conduct adversarial testing and defense, ensures AI resilience against manipulative inputs and attacks. The eighth, Create and maintain an AI-BOM, emphasizes tracking AI components to mitigate third-party risks. The ninth, Employ input security and control, focuses on preventing unauthorized or harmful inputs from influencing AI outputs. The tenth, Use RLHF and constitutional AI, highlights reinforcement learning with human oversight to refine AI behavior. The eleventh best practice, Create a safe environment and protect against data loss, ensures AI applications remain secure, and the final practice, Stay on top of new risks to AI models, encourages continuous adaptation to emerging threats. The entire layout is visually structured with interconnected circuit-like lines in the background, reinforcing a high-tech theme with simple, consistent iconography that maintains a clean and organized appearance.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_21.png) ### 1. Conduct risk assessments for new AI vendors When integrating new AI vendors, it's critical to assess the security risks associated with their technology. A risk assessment helps identify potential vulnerabilities, such as data breaches, privacy concerns, and the overall reliability of the AI vendor's system. Make sure to evaluate their compliance with recognized standards like GDPR or SOC 2 and ensure their data handling practices are secure. ***Tip:*** *Don't just review documentation---request detailed audit logs or third-party assessment reports from the vendor. These artifacts can offer insight into real-world incidents and how the vendor responded, which often reveals more than policy statements alone.* ### 2. Mitigate security threats in AI agents AI agents, though beneficial, introduce unique security challenges because of their autonomous nature. To mitigate risks, ensure that AI agents are constantly monitored for irregular behavior. Don't forget to implement access control mechanisms to limit their actions. Adopting robust anomaly detection and encryption practices can also help protect against unauthorized data access or malicious activity by AI agents. ***Tip:*** *Isolate AI agents in sandbox environments during initial deployment phases. This allows you to monitor real behavior patterns in a controlled setting before granting access to sensitive systems or data.* ### 3. Eliminate shadow AI The unauthorized use of AI tools within an organization poses security and compliance risks. To prevent shadow AI, implement strict governance and visibility into AI usage across departments, including: * Regular audits * Monitoring usage patterns * Educating employees about approved AI tools ***Tip:*** *Add AI-specific categories to your existing asset discovery tools. This makes it easier to automatically detect and flag unauthorized AI tools across the environment, especially in environments where AI usage may not be fully visible to security teams.* ### 4. Implement explainable AI Explainable AI (XAI) ensures transparency by providing clear, understandable explanations of how AI models make decisions. ![Architecture diagram illustrating the concept of Explainable AI (XAI) by comparing traditional AI decision-making with an explainable AI model. At the top, the TODAY section represents the current AI process, where training data flows into a machine learning process that generates a learned function. This function leads to a decision or recommendation that is presented to the user, who is depicted as an orange icon of a laptop. To the right, a white speech bubble lists user questions such as Why did you do that?, Why not something else?, When do you succeed?, When do you fail?, When can I trust you?, and How do I correct an error?, indicating a lack of transparency in current AI decision-making. Below, the XAI section introduces an improved approach where training data enters a new machine learning process, producing an explainable model that interacts with an explainable interface before reaching the user. This additional layer provides clarity, as indicated by a new speech bubble containing statements like I understand why, I understand why not, I know when you succeed, I know when you fail, I know when to trust you, and I know why you erred, demonstrating the enhanced interpretability of AI-driven decisions. The structure visually contrasts the opaque nature of traditional AI with the transparency and user comprehension enabled by XAI.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_16.png "Architecture diagram illustrating the concept of Explainable AI (XAI) by comparing traditional AI decision-making with an explainable AI model. At the top, the TODAY section represents the current AI process, where training data flows into a machine learning process that generates a learned function. This function leads to a decision or recommendation that is presented to the user, who is depicted as an orange icon of a laptop. To the right, a white speech bubble lists user questions such as Why did you do that?, Why not something else?, When do you succeed?, When do you fail?, When can I trust you?, and How do I correct an error?, indicating a lack of transparency in current AI decision-making. Below, the XAI section introduces an improved approach where training data enters a new machine learning process, producing an explainable model that interacts with an explainable interface before reaching the user. This additional layer provides clarity, as indicated by a new speech bubble containing statements like I understand why, I understand why not, I know when you succeed, I know when you fail, I know when to trust you, and I know why you erred, demonstrating the enhanced interpretability of AI-driven decisions. The structure visually contrasts the opaque nature of traditional AI with the transparency and user comprehension enabled by XAI.") This is particularly important in security-critical systems where understanding the model's behavior is essential for trust and accountability. Incorporating XAI techniques into generative AI applications can help mitigate risks related to biases, errors, and unexpected outputs. ### 5. Deploy continuous monitoring and vulnerability management Continuous monitoring is essential to detect security threats in real-time. By closely monitoring model inputs, outputs, and performance metrics, organizations can quickly identify vulnerabilities and address them before they lead to significant harm. Integrating vulnerability management systems into AI infrastructure also helps in identifying and patching security flaws promptly. ### 6. Execute regular AI audits Regular AI audits assess the integrity, security, and compliance of AI models. AI audits will ensure AI models are safe **and** operate within defined standards. AI audits should cover areas like model performance, data privacy, and ethical concerns. A comprehensive audit can help organizations detect hidden vulnerabilities, ensure the ethical use of AI, and maintain adherence to regulatory requirements. ### 7. Conduct adversarial testing and defense Adversarial testing simulates potential attacks on AI systems to assess their resilience. ![Architecture diagram illustrating the process of adversarial testing in a language model system by following the flow of a user input query through classification and generative models. The process begins with a user input query, represented by a white icon containing a figure. This query is first analyzed by a classification language model (LM) to determine if it is harmful. If classified as harmful, a green Yes label directs the query away from further processing. If classified as not harmful, a red No label routes the input to a generative LM, depicted in blue, which updates and rephrases the input before feeding it into the system again for a final answer. The rephrased input produces an output answer, shown in a blue oval, which then passes through another classification LM for additional validation. The second classification step once again checks whether the output is harmful. If deemed safe, the output is finalized and displayed as an output answer in a blue oval. If the response is classified as harmful, a red No label directs it to an output error, represented by an orange box. This structured process visually depicts how adversarial testing is used to refine language model outputs by iterating between classification and generative processes to detect and mitigate harmful responses.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_17.png "Architecture diagram illustrating the process of adversarial testing in a language model system by following the flow of a user input query through classification and generative models. The process begins with a user input query, represented by a white icon containing a figure. This query is first analyzed by a classification language model (LM) to determine if it is harmful. If classified as harmful, a green Yes label directs the query away from further processing. If classified as not harmful, a red No label routes the input to a generative LM, depicted in blue, which updates and rephrases the input before feeding it into the system again for a final answer. The rephrased input produces an output answer, shown in a blue oval, which then passes through another classification LM for additional validation. The second classification step once again checks whether the output is harmful. If deemed safe, the output is finalized and displayed as an output answer in a blue oval. If the response is classified as harmful, a red No label directs it to an output error, represented by an orange box. This structured process visually depicts how adversarial testing is used to refine language model outputs by iterating between classification and generative processes to detect and mitigate harmful responses.") By testing how AI models respond to manipulative inputs, security teams can identify weaknesses and improve system defenses. Implementing defenses such as input validation, anomaly detection, and redundancy can help protect AI systems from adversarial threats and reduce the risk of exploitation. ### 8. Create and maintain an AI-BOM An AI bill of materials (AI-BOM) is a comprehensive record of all the components used in AI systems, from third-party libraries to datasets. ![Image presenting an AI bill of materials (AI-BOM) framework with four key components, each enclosed in a rectangular white box with rounded edges. The top-left box is labeled Pre-trained modified models in bold text, followed by a description stating 3rd party models in a smaller font. A blue circular icon with a white checkmark is positioned on the left side of this box. The top-right box is labeled Monitoring GenAI output/code in bold text, with a supporting description GenAI security review underneath. A similar blue checkmark icon is placed to the left of this box. The bottom-left box is labeled Model dependencies in bold text, followed by two smaller lines reading AI frameworks and AI logging. A blue checkmark icon is also placed to the left of this section. The bottom-right box is labeled Data lineage in bold text, with two supporting questions underneath in a smaller font: Who owned it? and Who labeled it? Another blue checkmark icon is positioned to the left of this box. The four sections are evenly spaced in a two-by-two grid layout, with the title AI-BOM (AI bill of materials) centered at the top in bold black text. The design uses a minimalistic color scheme with a predominantly white background, black text, and blue icons for emphasis.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/GenAI-Security-2025_18.png "Image presenting an AI bill of materials (AI-BOM) framework with four key components, each enclosed in a rectangular white box with rounded edges. The top-left box is labeled Pre-trained modified models in bold text, followed by a description stating 3rd party models in a smaller font. A blue circular icon with a white checkmark is positioned on the left side of this box. The top-right box is labeled Monitoring GenAI output/code in bold text, with a supporting description GenAI security review underneath. A similar blue checkmark icon is placed to the left of this box. The bottom-left box is labeled Model dependencies in bold text, followed by two smaller lines reading AI frameworks and AI logging. A blue checkmark icon is also placed to the left of this section. The bottom-right box is labeled Data lineage in bold text, with two supporting questions underneath in a smaller font: Who owned it? and Who labeled it? Another blue checkmark icon is positioned to the left of this box. The four sections are evenly spaced in a two-by-two grid layout, with the title AI-BOM (AI bill of materials) centered at the top in bold black text. The design uses a minimalistic color scheme with a predominantly white background, black text, and blue icons for emphasis.") Maintaining a detailed AI-BOM ensures that only approved components are used. Which helps your organization manage risks associated with third-party vulnerabilities and software supply chain threats. It also enhances transparency and helps in compliance with regulatory standards. ### 9. Employ input security and control To prevent AI systems from being manipulated by harmful inputs, it's important to implement strong input validation and prompt sanitization. By filtering and verifying data before processing, it's much easier to avoid issues like data poisoning or prompt injection attacks. This practice is critical for ensuring that only legitimate, safe inputs are fed into the system, maintaining the integrity of AI outputs. ***Tip:*** *Test your input validation methods using adversarial prompts. This helps expose blind spots in your controls and confirms whether prompt sanitization is functioning as expected under real-world attack conditions.* ### 10. Use RLHF and constitutional AI Reinforcement learning with human feedback (RLHF) and constitutional AI are techniques that incorporate human oversight to improve AI model security. RLHF allows AI systems to be fine-tuned based on human feedback, enhancing their ability to operate safely. Constitutional AI, on the other hand, involves using separate AI models to evaluate and refine the outputs of the primary system. Which leads to greater robustness and security. ***Tip:*** *Maintain version control and audit trails for human feedback used in RLHF. This not only improves traceability but also makes it easier to investigate regressions or unexpected behavior resulting from past tuning cycles.* ### 11. Create a safe environment and protect against data loss To safeguard sensitive data, create a secure environment for AI applications that limits data exposure. By isolating confidential information in secure environments and employing encryption, you'll be able to reduce the risk of [data leaks](https://www.paloaltonetworks.com/cyberpedia/data-leak). A few tips for protecting against unauthorized data access: * Implement access controls * Use sandboxes * Allow only authorized users to interact with sensitive AI systems ***Tip:*** *Establish time-bound access windows for high-sensitivity data used in GenAI training or operations. This ensures that exposure is limited even if credentials are compromised or access controls fail.* ### 12. Stay on top of new risks to AI models The rapid evolution of generative AI introduces new security risks that organizations have to address constantly. That's what makes keeping up with emerging threats like prompt injection attacks, model hijacking, or adversarial attacks so crucial. Regularly updating security protocols and staying informed about the latest vulnerabilities helps ensure that AI systems remain resilient against evolving threats. ***Tip:*** *Join an AI-specific threat intelligence community or mailing list. These sources often flag new model vulnerabilities, proof-of-concept exploits, and threat actor tactics long before they show up in broader security feeds.* | ***Further reading:** [What Is Explainable AI (XAI)?](https://www.paloaltonetworks.com/cyberpedia/explainable-ai)* ![Network of applications icon.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-generative-ai-security/genai-apps-icon.svg) ## See firsthand how to make sure GenAI apps are used safely. [Get a personalized AI Access Security demo](https://start.paloaltonetworks.com/ai-access-contact-us.html) ## GenAI security FAQs #### What is GenAI security? GenAI security involves protecting AI systems and the content they generate from misuse, unauthorized access, and harmful manipulation. It ensures that AI operates as intended and secures data, models, and outputs against evolving risks like data breaches or adversarial attacks. #### What are the risks of GenAI? GenAI risks include data poisoning, prompt injection attacks, model theft, AI-generated code vulnerabilities, and unintended disclosure of sensitive information. Additionally, biases in AI models and unauthorized use of AI tools (shadow AI) can pose significant security and compliance threats. #### What are the risks of GenAI? GenAI risks include data poisoning, prompt injection attacks, model theft, AI-generated code vulnerabilities, and unintended disclosure of sensitive information. Additionally, biases in AI models and unauthorized use of AI tools (shadow AI) can pose significant security and compliance threats. #### Is generative AI safe to use? Generative AI can be safe if managed properly, but without robust security measures, it presents risks such as data breaches, malicious inputs, and exploitation of model vulnerabilities. Implementing secure development practices and continuous monitoring helps mitigate these risks. #### What are the risks of generative AI in banking? In banking, GenAI risks include data breaches, fraud, model manipulation, and the exposure of sensitive financial information. AI models might also introduce biases, impacting decision-making processes, or allow unauthorized access to banking systems through vulnerabilities in AI-powered applications. #### What are the two security risks of generative AI? The two main security risks of generative AI are prompt injection attacks, which manipulate AI outputs, and data poisoning, where attackers alter training data to influence the model's behavior, leading to biased or erroneous outcomes. #### How has GenAI affected security? GenAI has introduced new security challenges by providing advanced tools for attackers, such as automating malicious activities and evading traditional defenses. It has also highlighted the need for enhanced model protection, including securing AI-generated outputs and addressing vulnerabilities in AI systems. Related Content [White paper: AI Security: Navigating the New Frontier of Cyber Defense Find out why categorizing AI security as a standard security control can pose significant risks.](https://www.paloaltonetworks.com/resources/whitepapers/ai-security-navigating-the-new-frontier-of-cyber-defense) [Guide: The C-Suite Guide to GenAI Risk Management Learn a strategic framework for managing the risks associated with GenAI.](https://www.paloaltonetworks.com/resources/guides/the-c-suite-guide-to-genai-risk-management) [Reference architecture: Securing Generative AI Get to know Palo Alto Networks solutions for securing Generative AI (GenAI) applications.](https://www.paloaltonetworks.com/resources/guides/securing-generative-ai-overview) [LIVEcommunity blog: Secure AI by Design Discover a comprehensive GenAI security framework.](https://live.paloaltonetworks.com/t5/community-blogs/genai-security-technical-blog-series-1-6-secure-ai-by-design-a/ba-p/589504) ![Share page on facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/facebook-circular-icon.svg) ![Share page on linkedin](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/linkedin-circular-icon.svg) [![Share page by an email](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/email-circular-icon.svg)](mailto:?subject=What%20Is%20Generative%20AI%20Security%3F%20%5BExplanation%2FStarter%20Guide%5D&body=Generative%20AI%20security%20involves%20safeguarding%20the%20systems%20and%20data%20used%20by%20AI%20technologies%20that%20create%20new%20content.%20at%20https%3A//www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security) Back to Top {#footer} ## Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) ## Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) ## Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language