[](https://www.paloaltonetworks.com/?ts=markdown) * Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get Support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) ![x close icon to close mobile navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/x-black.svg) [![Palo Alto Networks logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg)](https://www.paloaltonetworks.com/?ts=markdown) ![magnifying glass search icon to open search field](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/search-black.svg) * [](https://www.paloaltonetworks.com/?ts=markdown) * Products ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Products [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [AI Security](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise Device Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical Device Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [OT Device Security](https://www.paloaltonetworks.com/network-security/ot-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex AgentiX](https://www.paloaltonetworks.com/cortex/agentix?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Exposure Management](https://www.paloaltonetworks.com/cortex/exposure-management?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Cortex Advanced Email Security](https://www.paloaltonetworks.com/cortex/advanced-email-security?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Unit 42 Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * Solutions ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Solutions Secure AI by Design * [Secure AI Ecosystem](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [Secure GenAI Usage](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) Network Security * [Cloud Network Security](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Data Center Security](https://www.paloaltonetworks.com/network-security/data-center?ts=markdown) * [DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Intrusion Detection and Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Device Security](https://www.paloaltonetworks.com/network-security/device-security?ts=markdown) * [OT Security](https://www.paloaltonetworks.com/network-security/ot-security-solution?ts=markdown) * [5G Security](https://www.paloaltonetworks.com/network-security/5g-security?ts=markdown) * [Secure All Apps, Users and Locations](https://www.paloaltonetworks.com/sase/secure-users-data-apps-devices?ts=markdown) * [Secure Branch Transformation](https://www.paloaltonetworks.com/sase/secure-branch-transformation?ts=markdown) * [Secure Work on Any Device](https://www.paloaltonetworks.com/sase/secure-work-on-any-device?ts=markdown) * [VPN Replacement](https://www.paloaltonetworks.com/sase/vpn-replacement-for-secure-remote-access?ts=markdown) * [Web \& Phishing Security](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) Cloud Security * [Application Security Posture Management (ASPM)](https://www.paloaltonetworks.com/cortex/cloud/application-security-posture-management?ts=markdown) * [Software Supply Chain Security](https://www.paloaltonetworks.com/cortex/cloud/software-supply-chain-security?ts=markdown) * [Code Security](https://www.paloaltonetworks.com/cortex/cloud/code-security?ts=markdown) * [Cloud Security Posture Management (CSPM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-security-posture-management?ts=markdown) * [Cloud Infrastructure Entitlement Management (CIEM)](https://www.paloaltonetworks.com/cortex/cloud/cloud-infrastructure-entitlement-management?ts=markdown) * [Data Security Posture Management (DSPM)](https://www.paloaltonetworks.com/cortex/cloud/data-security-posture-management?ts=markdown) * [AI Security Posture Management (AI-SPM)](https://www.paloaltonetworks.com/cortex/cloud/ai-security-posture-management?ts=markdown) * [Cloud Detection \& Response](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Cloud Workload Protection (CWP)](https://www.paloaltonetworks.com/cortex/cloud/cloud-workload-protection?ts=markdown) * [Web Application \& API Security (WAAS)](https://www.paloaltonetworks.com/cortex/cloud/web-app-api-security?ts=markdown) Security Operations * [Cloud Detection \& Response](https://www.paloaltonetworks.com/cortex/cloud-detection-and-response?ts=markdown) * [Security Information and Event Management](https://www.paloaltonetworks.com/cortex/modernize-siem?ts=markdown) * [Network Security Automation](https://www.paloaltonetworks.com/cortex/network-security-automation?ts=markdown) * [Incident Case Management](https://www.paloaltonetworks.com/cortex/incident-case-management?ts=markdown) * [SOC Automation](https://www.paloaltonetworks.com/cortex/security-operations-automation?ts=markdown) * [Threat Intel Management](https://www.paloaltonetworks.com/cortex/threat-intel-management?ts=markdown) * [Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Attack Surface Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/attack-surface-management?ts=markdown) * [Compliance Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/compliance-management?ts=markdown) * [Internet Operations Management](https://www.paloaltonetworks.com/cortex/cortex-xpanse/internet-operations-management?ts=markdown) * [Extended Data Lake (XDL)](https://www.paloaltonetworks.com/cortex/cortex-xdl?ts=markdown) * [Agentic Assistant](https://www.paloaltonetworks.com/cortex/cortex-agentic-assistant?ts=markdown) Endpoint Security * [Endpoint Protection](https://www.paloaltonetworks.com/cortex/endpoint-protection?ts=markdown) * [Extended Detection \& Response](https://www.paloaltonetworks.com/cortex/detection-and-response?ts=markdown) * [Ransomware Protection](https://www.paloaltonetworks.com/cortex/ransomware-protection?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/cortex/digital-forensics?ts=markdown) [Industries](https://www.paloaltonetworks.com/industry?ts=markdown) * [Public Sector](https://www.paloaltonetworks.com/industry/public-sector?ts=markdown) * [Financial Services](https://www.paloaltonetworks.com/industry/financial-services?ts=markdown) * [Manufacturing](https://www.paloaltonetworks.com/industry/manufacturing?ts=markdown) * [Healthcare](https://www.paloaltonetworks.com/industry/healthcare?ts=markdown) * [Small \& Medium Business Solutions](https://www.paloaltonetworks.com/industry/small-medium-business-portfolio?ts=markdown) * Services ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Services [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Assess](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [AI Security Assessment](https://www.paloaltonetworks.com/unit42/assess/ai-security-assessment?ts=markdown) * [Attack Surface Assessment](https://www.paloaltonetworks.com/unit42/assess/attack-surface-assessment?ts=markdown) * [Breach Readiness Review](https://www.paloaltonetworks.com/unit42/assess/breach-readiness-review?ts=markdown) * [BEC Readiness Assessment](https://www.paloaltonetworks.com/bec-readiness-assessment?ts=markdown) * [Cloud Security Assessment](https://www.paloaltonetworks.com/unit42/assess/cloud-security-assessment?ts=markdown) * [Compromise Assessment](https://www.paloaltonetworks.com/unit42/assess/compromise-assessment?ts=markdown) * [Cyber Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/cyber-risk-assessment?ts=markdown) * [M\&A Cyber Due Diligence](https://www.paloaltonetworks.com/unit42/assess/mergers-acquisitions-cyber-due-diligence?ts=markdown) * [Penetration Testing](https://www.paloaltonetworks.com/unit42/assess/penetration-testing?ts=markdown) * [Purple Team Exercises](https://www.paloaltonetworks.com/unit42/assess/purple-teaming?ts=markdown) * [Ransomware Readiness Assessment](https://www.paloaltonetworks.com/unit42/assess/ransomware-readiness-assessment?ts=markdown) * [SOC Assessment](https://www.paloaltonetworks.com/unit42/assess/soc-assessment?ts=markdown) * [Supply Chain Risk Assessment](https://www.paloaltonetworks.com/unit42/assess/supply-chain-risk-assessment?ts=markdown) * [Tabletop Exercises](https://www.paloaltonetworks.com/unit42/assess/tabletop-exercise?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Respond](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Cloud Incident Response](https://www.paloaltonetworks.com/unit42/respond/cloud-incident-response?ts=markdown) * [Digital Forensics](https://www.paloaltonetworks.com/unit42/respond/digital-forensics?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond/incident-response?ts=markdown) * [Managed Detection and Response](https://www.paloaltonetworks.com/unit42/respond/managed-detection-response?ts=markdown) * [Managed Threat Hunting](https://www.paloaltonetworks.com/unit42/respond/managed-threat-hunting?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Unit 42 Retainer](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * [Transform](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [IR Plan Development and Review](https://www.paloaltonetworks.com/unit42/transform/incident-response-plan-development-review?ts=markdown) * [Security Program Design](https://www.paloaltonetworks.com/unit42/transform/security-program-design?ts=markdown) * [Virtual CISO](https://www.paloaltonetworks.com/unit42/transform/vciso?ts=markdown) * [Zero Trust Advisory](https://www.paloaltonetworks.com/unit42/transform/zero-trust-advisory?ts=markdown) [Global Customer Services](https://www.paloaltonetworks.com/services?ts=markdown) * [Education \& Training](https://www.paloaltonetworks.com/services/education?ts=markdown) * [Professional Services](https://www.paloaltonetworks.com/services/consulting?ts=markdown) * [Success Tools](https://www.paloaltonetworks.com/services/customer-success-tools?ts=markdown) * [Support Services](https://www.paloaltonetworks.com/services/solution-assurance?ts=markdown) * [Customer Success](https://www.paloaltonetworks.com/services/customer-success?ts=markdown) [![](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/logo-unit-42.svg) UNIT 42 RETAINER Custom-built to fit your organization's needs, you can choose to allocate your retainer hours to any of our offerings, including proactive cyber risk management services. Learn how you can put the world-class Unit 42 Incident Response team on speed dial. Learn more](https://www.paloaltonetworks.com/unit42/retainer?ts=markdown) * Partners ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Partners NextWave Partners * [NextWave Partner Community](https://www.paloaltonetworks.com/partners?ts=markdown) * [Cloud Service Providers](https://www.paloaltonetworks.com/partners/nextwave-for-csp?ts=markdown) * [Global Systems Integrators](https://www.paloaltonetworks.com/partners/nextwave-for-gsi?ts=markdown) * [Technology Partners](https://www.paloaltonetworks.com/partners/technology-partners?ts=markdown) * [Service Providers](https://www.paloaltonetworks.com/partners/service-providers?ts=markdown) * [Solution Providers](https://www.paloaltonetworks.com/partners/nextwave-solution-providers?ts=markdown) * [Managed Security Service Providers](https://www.paloaltonetworks.com/partners/managed-security-service-providers?ts=markdown) * [XMDR Partners](https://www.paloaltonetworks.com/partners/managed-security-service-providers/xmdr?ts=markdown) Take Action * [Portal Login](https://www.paloaltonetworks.com/partners/nextwave-partner-portal?ts=markdown) * [Managed Services Program](https://www.paloaltonetworks.com/partners/managed-security-services-provider-program?ts=markdown) * [Become a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=becomepartner) * [Request Access](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerregistration?type=requestaccess) * [Find a Partner](https://paloaltonetworks.my.site.com/NextWavePartnerProgram/s/partnerlocator) [CYBERFORCE CYBERFORCE represents the top 1% of partner engineers trusted for their security expertise. Learn more](https://www.paloaltonetworks.com/cyberforce?ts=markdown) * Company ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Company Palo Alto Networks * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Management Team](https://www.paloaltonetworks.com/about-us/management?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com) * [Locations](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Ethics \& Compliance](https://www.paloaltonetworks.com/company/ethics-and-compliance?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Military \& Veterans](https://jobs.paloaltonetworks.com/military) [Why Palo Alto Networks?](https://www.paloaltonetworks.com/why-paloaltonetworks?ts=markdown) * [Precision AI Security](https://www.paloaltonetworks.com/precision-ai-security?ts=markdown) * [Our Platform Approach](https://www.paloaltonetworks.com/why-paloaltonetworks/platformization?ts=markdown) * [Accelerate Your Cybersecurity Transformation](https://www.paloaltonetworks.com/why-paloaltonetworks/nam-cxo-portfolio?ts=markdown) * [Awards \& Recognition](https://www.paloaltonetworks.com/about-us/awards?ts=markdown) * [Customer Stories](https://www.paloaltonetworks.com/customers?ts=markdown) * [Global Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Trust 360 Program](https://www.paloaltonetworks.com/resources/whitepapers/trust-360?ts=markdown) Careers * [Overview](https://jobs.paloaltonetworks.com/) * [Culture \& Benefits](https://jobs.paloaltonetworks.com/en/culture/) [A Newsweek Most Loved Workplace "Businesses that do right by their employees" Read more](https://www.paloaltonetworks.com/company/press/2021/palo-alto-networks-secures-top-ranking-on-newsweek-s-most-loved-workplaces-list-for-2021?ts=markdown) * More ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) More Resources * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Unit 42 Threat Research](https://unit42.paloaltonetworks.com/) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Tech Insider](https://techinsider.paloaltonetworks.com/) * [Knowledge Base](https://knowledgebase.paloaltonetworks.com/) * [Palo Alto Networks TV](https://tv.paloaltonetworks.com/) * [Perspectives of Leaders](https://www.paloaltonetworks.com/perspectives/?ts=markdown) * [Cyber Perspectives Magazine](https://www.paloaltonetworks.com/cybersecurity-perspectives/cyber-perspectives-magazine?ts=markdown) * [Regional Cloud Locations](https://www.paloaltonetworks.com/products/regional-cloud-locations?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Security Posture Assessment](https://www.paloaltonetworks.com/security-posture-assessment?ts=markdown) * [Threat Vector Podcast](https://unit42.paloaltonetworks.com/unit-42-threat-vector-podcast/) * [Packet Pushers Podcasts](https://www.paloaltonetworks.com/podcasts/packet-pusher?ts=markdown) Connect * [LIVE community](https://live.paloaltonetworks.com/) * [Events](https://events.paloaltonetworks.com/) * [Executive Briefing Center](https://www.paloaltonetworks.com/about-us/executive-briefing-program?ts=markdown) * [Demos](https://www.paloaltonetworks.com/demos?ts=markdown) * [Contact us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) [Blog Stay up-to-date on industry trends and the latest innovations from the world's largest cybersecurity Learn more](https://www.paloaltonetworks.com/blog/) * Sign In ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Sign In * Customer * Partner * Employee * [Login to download](https://www.paloaltonetworks.com/login?ts=markdown) * [Join us to become a member](https://www.paloaltonetworks.com/login?screenToRender=traditionalRegistration&ts=markdown) * EN ![black arrow pointing left to go back to main navigation](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/arrow-right-black.svg) Language * [USA (ENGLISH)](https://www.paloaltonetworks.com) * [AUSTRALIA (ENGLISH)](https://www.paloaltonetworks.com.au) * [BRAZIL (PORTUGUÉS)](https://www.paloaltonetworks.com.br) * [CANADA (ENGLISH)](https://www.paloaltonetworks.ca) * [CHINA (简体中文)](https://www.paloaltonetworks.cn) * [FRANCE (FRANÇAIS)](https://www.paloaltonetworks.fr) * [GERMANY (DEUTSCH)](https://www.paloaltonetworks.de) * [INDIA (ENGLISH)](https://www.paloaltonetworks.in) * [ITALY (ITALIANO)](https://www.paloaltonetworks.it) * [JAPAN (日本語)](https://www.paloaltonetworks.jp) * [KOREA (한국어)](https://www.paloaltonetworks.co.kr) * [LATIN AMERICA (ESPAÑOL)](https://www.paloaltonetworks.lat) * [MEXICO (ESPAÑOL)](https://www.paloaltonetworks.com.mx) * [SINGAPORE (ENGLISH)](https://www.paloaltonetworks.sg) * [SPAIN (ESPAÑOL)](https://www.paloaltonetworks.es) * [TAIWAN (繁體中文)](https://www.paloaltonetworks.tw) * [UK (ENGLISH)](https://www.paloaltonetworks.co.uk) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [What's New](https://www.paloaltonetworks.com/resources?ts=markdown) * [Get support](https://support.paloaltonetworks.com/SupportAccount/MyAccount) * [Under Attack?](https://start.paloaltonetworks.com/contact-unit42.html) * [Demos and Trials](https://www.paloaltonetworks.com/get-started?ts=markdown) Search All * [Tech Docs](https://docs.paloaltonetworks.com/search) Close search modal [Deploy Bravely --- Secure your AI transformation with Prisma AIRS](https://www.paloaltonetworks.com/deploybravely?ts=markdown) [](https://www.paloaltonetworks.com/?ts=markdown) 1. [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) 2. [What Is Responsible AI? Principles, Pitfalls, \& How-tos](https://www.paloaltonetworks.com/cyberpedia/what-is-responsible-ai?ts=markdown) Table of contents * [What does the industry actually mean by 'responsible AI'?](#what-does-the-industry-actually-mean-by-responsible-ai) * [What's driving the focus on responsible AI today?](#what-is-driving-the-focus-on-responsible-ai-today) * [The 6 core principles of responsible AI](#the-6-core-principles-of-responsible-ai) * [Why do so many responsible AI efforts fail in practice?](#why-do-so-many-responsible-ai-efforts-fail-in-practice) * [How to implement responsible AI in the real world](#how-to-implement-responsible-ai-in-the-world) * [What frameworks and standards guide responsible AI?](#what-frameworks-and-standards-guide-responsible-ai) * [What's different about responsible AI for GenAI?](#what-is-different-about-responsible-ai-for-genai) * [Responsible AI FAQs](#responsible-ai-faqs) # What Is Responsible AI? Principles, Pitfalls, \& How-tos 6 min. read Table of contents * [What does the industry actually mean by 'responsible AI'?](#what-does-the-industry-actually-mean-by-responsible-ai) * [What's driving the focus on responsible AI today?](#what-is-driving-the-focus-on-responsible-ai-today) * [The 6 core principles of responsible AI](#the-6-core-principles-of-responsible-ai) * [Why do so many responsible AI efforts fail in practice?](#why-do-so-many-responsible-ai-efforts-fail-in-practice) * [How to implement responsible AI in the real world](#how-to-implement-responsible-ai-in-the-world) * [What frameworks and standards guide responsible AI?](#what-frameworks-and-standards-guide-responsible-ai) * [What's different about responsible AI for GenAI?](#what-is-different-about-responsible-ai-for-genai) * [Responsible AI FAQs](#responsible-ai-faqs) 1. What does the industry actually mean by 'responsible AI'? * [1. What does the industry actually mean by 'responsible AI'?](#what-does-the-industry-actually-mean-by-responsible-ai) * [2. What's driving the focus on responsible AI today?](#what-is-driving-the-focus-on-responsible-ai-today) * [3. The 6 core principles of responsible AI](#the-6-core-principles-of-responsible-ai) * [4. Why do so many responsible AI efforts fail in practice?](#why-do-so-many-responsible-ai-efforts-fail-in-practice) * [5. How to implement responsible AI in the real world](#how-to-implement-responsible-ai-in-the-world) * [6. What frameworks and standards guide responsible AI?](#what-frameworks-and-standards-guide-responsible-ai) * [7. What's different about responsible AI for GenAI?](#what-is-different-about-responsible-ai-for-genai) * [8. Responsible AI FAQs](#responsible-ai-faqs) Responsible AI is the discipline of designing, developing, and deploying AI systems in ways that are lawful, safe, and aligned with human values. It involves setting clear goals, managing risks, and documenting how systems are used. That includes processes for oversight, accountability, and continuous improvement. ## What does the industry actually mean by 'responsible AI'? The phrase responsible AI gets used a lot. But in practice, the term still gets applied inconsistently. Some use it to describe high-level ethical values. Others treat it like a checklist for compliance. And some use it interchangeably with concepts like trustworthy AI or AI safety. That's a problem. Because without a clear definition, it's hard to build a real program around it. At its core, responsible AI refers to how AI systems are governed so they behave safely, lawfully, and accountably in the real world. It's about managing risk, ensuring oversight, and making sure the system does what it's supposed to do without causing harm. ![The left side contains a rounded white panel with a document icon and bold heading 'The official meaning,' followed by text explaining responsible AI as governance of how systems are built and used. Below are pill-shaped example labels such as deployment reviews, impact assessments, risk tiering, escalation protocols, and monitoring and logging. On the right, three peach-colored boxes list common misuses with bold headers: Ethical AI as responsible AI, checklist compliance as responsible AI, and AI safety as responsible AI. Each box includes short explanatory text and small pill-shaped example labels.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/What-responsible-AI-actually-means.png "The left side contains a rounded white panel with a document icon and bold heading 'The official meaning,' followed by text explaining responsible AI as governance of how systems are built and used. Below are pill-shaped example labels such as deployment reviews, impact assessments, risk tiering, escalation protocols, and monitoring and logging. On the right, three peach-colored boxes list common misuses with bold headers: Ethical AI as responsible AI, checklist compliance as responsible AI, and AI safety as responsible AI. Each box includes short explanatory text and small pill-shaped example labels.") But the term often gets conflated with three different concepts: * **Responsible AI as governance discipline**: Building structures, controls, and reviews to govern how AI is designed, deployed, and monitored. * **Ethical AI as intent or philosophy**: Centering human values, rights, and societal norms, often without concrete implementation steps. * **AI safety as technical robustness** : Preventing accidents, [adversarial failures](https://www.paloaltonetworks.com/cyberpedia/what-are-adversarial-attacks-on-AI-Machine-Learning), or long-term existential risks, especially in advanced systems. All three are valid areas of concern. But they're not the same. This article focuses on responsible AI as a practical governance discipline: what organizations can do to ensure their AI systems are trustworthy, traceable, and under control throughout their lifecycle. ## What's driving the focus on responsible AI today? "Sustainable adoption of AI necessitates an ecosystem of intentionally designed principles, guidelines and practices -- collectively referred to as "responsible AI" -- to effectively govern the technology for desirable outcomes." [- World Economic Forum, Advancing Responsible AI Innovation: A Playbook](https://www.weforum.org/publications/advancing-responsible-ai-innovation-a-playbook/) AI is no longer a behind-the-scenes tool. It makes decisions, generates content, and interacts directly with people. And when it fails, the consequences aren't hypothetical. Why? Because those failures are already happening. [Hallucinated](https://www.paloaltonetworks.com/cyberpedia/what-are-ai-hallucinations) medical advice. Toxic or misleading content. Job candidates filtered out unfairly. All from AI systems that weren't built---or governed---with enough safeguards. At the same time, generative AI is scaling fast. It's being embedded into search engines, browsers, customer service platforms, and creative workflows. Which means: the stakes are higher. And the room for error is smaller. Not to mention, regulators are watching. So are customers, employees, and internal compliance teams. They want to know how AI decisions are made, who's accountable, and what happens when something goes wrong. This is where responsible AI comes in. And it's important to be clear about what that actually means. ***Note:*** *Responsible AI ≠ marketing compliance. It's not a mission statement. It's an operational discipline focused on managing risk, ensuring oversight, and building AI systems that behave reliably in the real world.* | ***Further reading:** [Black Box AI: Problems, Security Implications, \& Solutions](https://www.paloaltonetworks.com/cyberpedia/black-box-ai)* ## The 6 core principles of responsible AI Responsible AI begins with shared principles. They set the foundation for how AI systems should be developed, deployed, and governed. But these principles aren't just abstract values. They define what trustworthy behavior looks like in real systems and they guide the decisions teams make at every stage of the AI lifecycle. Let's break down each principle and why it matters. ![A hexagonal ring of six colored circles surrounds a central gray AI-and-padlock icon. Each circle contains a white line drawing—scales for fairness, gears for robustness, an eye for transparency, a padlock for privacy, a document and pen for accountability, and a person with a checkmark for human oversight. Lines extend from each icon to short text descriptions placed around the perimeter, forming a radial layout of principles.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/The-6-core-principles-of-responsible-AI.png "A hexagonal ring of six colored circles surrounds a central gray AI-and-padlock icon. Each circle contains a white line drawing—scales for fairness, gears for robustness, an eye for transparency, a padlock for privacy, a document and pen for accountability, and a person with a checkmark for human oversight. Lines extend from each icon to short text descriptions placed around the perimeter, forming a radial layout of principles.") ### 1. Fairness Fairness means systems should not create discriminatory, exclusionary, or unjust outcomes. Especially across demographic groups or protected categories. This includes how training data is sourced, how models are evaluated, and how edge cases are handled. Without fairness controls, [AI bias](https://www.paloaltonetworks.com/cyberpedia/what-is-ai-bias) can quietly propagate through the system. ### 2. Robustness Robustness means the system behaves reliably. Even when it's under stress, exposed to unusual inputs, or targeted by attackers. Examples include degraded data quality, system failures, and edge conditions. Without robustness, a model that performs well in testing can break down in deployment. ### 3. Transparency Transparency makes the system understandable. That includes explaining how inputs affect outputs, surfacing known limitations, and enabling meaningful review. Without transparency, stakeholders can't evaluate the system's behavior or trust its results. ### 4. Privacy Privacy protects [sensitive data](https://www.paloaltonetworks.com/cyberpedia/sensitive-data) from exposure, misuse, or overretention. That spans from data collection to training pipelines to user logs. Without privacy safeguards, systems can inadvertently leak personal information or violate policy and regulatory expectations. ### 5. Accountability Accountability means someone owns the outcome. Roles, decisions, and risks have to be clearly documented and traceable across the AI lifecycle. Without it, organizations lose control over how AI systems behave and who's responsible when they fail. ### 6. Human oversight Human oversight ensures people remain in control. It includes setting override protocols, defining intervention triggers, and reviewing system performance in context. Without oversight, automation can drift beyond its intended role without anyone noticing. ***Note:*** *These principles aren't always interpreted the same way across frameworks. But they converge in practice when tied to clear lifecycle responsibilities.* Now let's map those principles to lifecycle touchpoints where they need to show up in practice. As you can see in the table below, these principles don't exist in isolation. For example, increasing fairness may require collecting demographic data, raising new privacy risks. | Responsible AI principles across the system lifecycle | |-------------------------------------------------------| | Principle | Lifecycle touchpoints | Example action | |---------------------|---------------------------------|-----------------------------------------------------------------------------| | **Fairness** | Data selection, evaluation | Run bias audits across subgroups. Document known limitations. | | **Robustness** | Testing, deployment, monitoring | Conduct adversarial stress tests. Validate inputs. Monitor for instability. | | **Transparency** | Design, deployment | Publish model documentation. Explain how outputs are generated. | | **Privacy** | Data ingestion, storage, logs | Minimize use of sensitive data. Apply masking or redaction. Log access. | | **Accountability** | All lifecycle stages | Assign owners. Document decisions. Establish clear escalation paths. | | **Human oversight** | Deployment, monitoring | Define override protocols. Track how and when humans intervene. | Which means responsible AI isn't about maximizing any single value. It's about navigating tradeoffs with structure, documentation, and judgment. When principles are grounded in lifecycle actions, they become easier to apply and easier to enforce. ## Why do so many responsible AI efforts fail in practice? Many organizations have launched responsible AI initiatives. Fewer have sustained them. Even fewer have made them work in real systems. In fact, recent research shows that fewer than 1% of companies were assessed at the highest maturity stage for responsible AI. Most are still stuck at the earliest maturity stages with little real governance in place. ![Text on the left defines four stages of responsible AI maturity, each described in short paragraphs. On the right, two vertical stacked bar charts labeled 2024 and 2025 display percentages for stages 1 through 4 using four shades of blue, with the darkest representing stage 4 at the top. The 2024 bar shows 8%, 78%, 14%, and 0%, while the 2025 bar shows 14%, 67%, 19%, and 0%. A legend of four blue circles identifies stages 1–4. A small research citation appears in the bottom corner.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/Global-responsbile-ai-implementation.png "Text on the left defines four stages of responsible AI maturity, each described in short paragraphs. On the right, two vertical stacked bar charts labeled 2024 and 2025 display percentages for stages 1 through 4 using four shades of blue, with the darkest representing stage 4 at the top. The 2024 bar shows 8%, 78%, 14%, and 0%, while the 2025 bar shows 14%, 67%, 19%, and 0%. A legend of four blue circles identifies stages 1–4. A small research citation appears in the bottom corner.") Why? Because most failures don't come from lack of interest. They come from poor structure. When principles aren't paired with process, oversight fades and nothing sticks. This is especially common in organizations that treat responsible AI as a side effort rather than a formal discipline with defined roles and repeatable controls. You can see the patterns across sectors, industries, and regions. A system gets deployed. A decision is made. Something goes wrong. And there's no clear way to explain what happened, who approved it, or how to prevent it next time. ![A vertical line numbered 1 through 5 runs down the center, with circular markers for each number. On alternating sides, pairs of bold headings and brief explanations list the reasons: implementation inertia, principles with no translation, role confusion, data governance gaps, and fragmented accountability. A sentence at the bottom in italic text notes that these issues compound to weaken AI governance.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/Top-5-reasons-responsible-AI-programs-fail.png "A vertical line numbered 1 through 5 runs down the center, with circular markers for each number. On alternating sides, pairs of bold headings and brief explanations list the reasons: implementation inertia, principles with no translation, role confusion, data governance gaps, and fragmented accountability. A sentence at the bottom in italic text notes that these issues compound to weaken AI governance.") Here's where responsible AI most often breaks down: * **Implementation inertia** Responsible AI programs often stall after the principles phase. Leadership supports the idea. Teams express interest. But there's no timeline. No path to execution. And no consequences when tasks are missed. Without incentives, enforcement, or escalation paths, the initiative fades into background noise. * **Principles with no operational translation** Many programs publish values like fairness, transparency, or accountability. But they don't define what those values mean for system design, data curation, or model monitoring. Teams are left to interpret the guidance on their own. That leads to inconsistency and gaps in coverage. * **Role confusion** Who's responsible for bias testing? Who owns model documentation? Who approves risk reviews before launch? In many cases, no one knows. Responsibilities are spread across policy, legal, and engineering. But the handoffs are unclear. And when something fails, the accountability trail is hard to follow. * **Data governance gaps** The system depends on data. But the data isn't documented. There's no record of where it came from, how it was modified, or who had access. That makes it harder to explain how a model works or why it produced a given result. It also makes it harder to respond when harm occurs. * **Fragmented accountability** Responsible AI reviews are often disconnected from day-to-day development. The people reviewing risks don't work on the system. The people building the system don't engage with the governance process. As a result, ownership becomes distributed but diluted. And critical gaps go unnoticed. These aren't isolated issues. They tend to compound. One weak link leads to another. And the result is a responsible AI program that exists in principle but never in practice. The next section breaks down how to move from principles to implementation at both the system and program level. ![Icon of a network](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/ai-trism/icon-assessment-1.svg) ## Free AI Risk Assessment Get a complimentary vulnerability assessment of your AI ecosystem. [Claim assessment](https://www.paloaltonetworks.com/network-security/cloud-and-ai-risk-assessment#connect) ## How to implement responsible AI in the real world ![A two-column layout places a tall gray panel on the left titled How mature programs operate, containing three stacked statements about responsible AI integration, continuous oversight, and traceability. To the right, a large horizontal schematic shows system-level tasks at the top—embedding controls, classifying risk, performing impact assessments, ensuring traceability, and building monitoring pipelines—aligned vertically with organizational-level tasks underneath, such as accountability programs, review gates, shared governance, reviewer training, and logging. A dark gray horizontal bar labeled System Level spans the top; a matching bar labeled Organizational Level spans the bottom.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/How-to-implement-responsible-AI.png "A two-column layout places a tall gray panel on the left titled How mature programs operate, containing three stacked statements about responsible AI integration, continuous oversight, and traceability. To the right, a large horizontal schematic shows system-level tasks at the top—embedding controls, classifying risk, performing impact assessments, ensuring traceability, and building monitoring pipelines—aligned vertically with organizational-level tasks underneath, such as accountability programs, review gates, shared governance, reviewer training, and logging. A dark gray horizontal bar labeled System Level spans the top; a matching bar labeled Organizational Level spans the bottom.") Principles aren't enough. Even the best-intentioned responsible AI programs fall short without clear implementation steps. Success depends on what you build and how you govern it across the full AI lifecycle. There are two main dimensions to focus on: * What your teams do at the project level * And how your organization supports it at the program level Let's start with the system itself. ### System level: Embed controls into the development lifecycle * **Start with use definition.** Be explicit about what the model is for and what it isn't. Don't forget prohibited uses, even if they seem indirect or unlikely. Because deployment context shapes risk. A model optimized for efficiency could end up excluding high-need users without proper constraints. So define the intended purpose, document guardrails, and outline misuse scenarios from the outset. ***Tip:*** *Map misuse scenarios to specific user behaviors, not just technical boundaries.* * **Then classify the risk.** Not every model needs the same level of scrutiny. Some assist humans. Others make high-impact decisions. The risk tier should determine how deep your safeguards go. * **Use a formal impact assessment.** Evaluate stakeholder harms, use context, and system behavior. This won't replace technical testing. But it will guide it. Ask: Who might this system affect? How? Under what conditions? ***Tip:*** *Use impact assessments to flag where safeguards may conflict like fairness vs. privacy.* * **Ensure traceability.** Track data lineage, configuration history, and decision logic. Because when something goes wrong, you'll need to retrace the path. You can't do that without documentation. * **Build monitoring pipelines.** Don't just track performance metrics. Add drift detection, outlier alerts, and escalation triggers. Something needs to alert you when the system starts to behave in ways it shouldn't. And when that happens, have a defined escalation path: name the person responsible. Spell out the triggers. Without that, monitoring becomes passive observation. ***Tip:*** *Define escalation thresholds before launch. Don't wait to invent them under pressure.* ### Organizational level: Build a program around accountability * **Start with review gates.** Don't greenlight model launches without a second set of eyes. Require approval from a responsible AI lead or cross-functional review group based on the system's risk tier. Because risk isn't always obvious to the team building the model. Review adds distance. And distance reveals assumptions. ***Tip:*** *Map misuse scenarios to specific user behaviors, not just technical boundaries.* * **Create shared governance.** Don't let responsible AI sit with a single team. Assign clear roles across AI engineering, legal, product, and compliance. And document the handoffs. Vague ownership is where oversight breaks down. Review adds distance. And distance reveals assumptions. ***Tip:*** *Assign roles along with clear decision and escalation authority.* * **Train your reviewers.** If someone is expected to flag issues, make sure they understand the system. And how it works. Otherwise, the review process becomes a formality. ***Tip:*** *Give reviewers direct access to full model documentation, including configs and decision logic.* * **Log everything.** Not just for audits. But to preserve memory over time. What was reviewed. What was flagged. What was approved. And why. That's how you create continuity. And it's how future decisions get better. ### Evolve from reactive fixes to embedded safeguards Launching a responsible AI program is just the start. To make it sustainable, the practices need to evolve. Instead of reacting to issues after deployment, mature programs build safeguards into system design. Controls are tied to risk tiers. Escalation paths and governance become part of the delivery process. Not side workflows. That's the shift from intention to integration. Where responsible AI isn't just approved. It's applied. ***Tip:*** *Track how long it takes your team to identify and respond to AI issues. Response time is a key maturity signal and an early warning for gaps in oversight.* Frameworks and standards can help structure these practices. The next section outlines the most widely used models---and how they support governance, risk, and implementation across the AI lifecycle. ![Icon of the Prisma AIRS logo on a browser](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/ai-trism/icon-prisma-airs.svg) ## Interactive tour: Prisma AIRS See firsthand how Prisma AIRS implements AI monitoring, red teaming, and governance controls. [Launch tour](https://start.paloaltonetworks.com/prisma-airs-demo.html#bodysec-content-heading) ## What frameworks and standards guide responsible AI? Responsible AI is easier to talk about than to put into practice. Which means organizations need more than principles. They need clear, structured guidance. Today, several well‑established frameworks exist. Each supports a different aspect of responsible AI, from governance and risk to implementation and legal compliance. Here's how they compare: | Comparison of responsible AI frameworks and standards | |-------------------------------------------------------| | Framework / Standard | Issuer | Primary focus | What it adds | |---------------------------------------------------------------------------------------------------------------------|----------------------|---------------------------------|--------------------------------------------------------------------------------------------------------------------| | [ISO/IEC 42001](https://www.iso.org/standard/42001) | ISO/IEC JTC 1/SC 42 | AI management systems | Defines how organizations structure AI governance, roles, policies, and documentation. | | [ISO/IEC 42005](https://www.iso.org/standard/42005) | ISO/IEC JTC 1/SC 42 | AI system impact assessment | Guides teams through system-specific risk reviews, harm identification, and mitigation planning. | | [ISO/IEC 23894](https://www.iso.org/standard/77304.html) | ISO/IEC | AI risk management | Aligns AI risk handling with ISO 31000 and supports structured analysis across the AI lifecycle. | | [NIST AI RMF 1.0](https://www.nist.gov/itl/ai-risk-management-framework) | U.S. NIST | Trust and risk management | Provides practical lifecycle actions across Govern, Map, Measure, and Manage; useful for implementation teams. | | [EU AI Act](https://eur-lex.europa.eu/eli/reg/2024/1689/oj/) | European Commission | Binding regulation | Establishes legal obligations, high-risk system requirements, transparency rules, and conformity assessments. | | [OECD AI Principles](https://oecd.ai/en/ai-principles) | OECD | Global policy baseline | Sets shared expectations for fairness, transparency, robustness, and accountability; influences national policies. | | [UNESCO Recommendation on the Ethics of AI](https://unesdoc.unesco.org/ark:/48223/pf0000381137) | UNESCO | Ethical and governance guidance | Provides globally endorsed standards for rights, oversight, and long-term societal considerations. | | [WEF Responsible AI Playbook](https://www.weforum.org/publications/advancing-responsible-ai-innovation-a-playbook/) | World Economic Forum | Enterprise practice guidance | Offers practical steps for building responsible AI programs and aligning them to business workflows. | Important: These frameworks aren't competing checklists. They cover similar themes but play different roles across governance, risk, implementation, and compliance. Each one supports a different layer of responsible AI. Used individually or in combination, they help translate principles into systems that are actually governed. | ***Further reading:*** * [*What Is AI Governance?*](https://www.paloaltonetworks.com/cyberpedia/ai-governance) * [*AI Risk Management Frameworks: Everything You Need to Know*](https://www.paloaltonetworks.com/cyberpedia/ai-risk-management-framework) * [*NIST AI Risk Management Framework (AI RMF)*](https://www.paloaltonetworks.com/cyberpedia/nist-ai-risk-management-framework) ## What's different about responsible AI for GenAI? ![A large circular cluster of six orange circles surrounds a central gray circle containing a stylized AI icon with an exclamation mark. Each orange circle includes a white line icon and a numbered label, with lines extending outward to short text descriptions. The six risks read: hallucinated content, prompt injection attacks, jailbreaking and misuse, open-ended risk exposure, real-time output filtering required, and dynamic oversight needed. The circular layout resembles spokes around a hub, with short explanatory sentences next to each spoke.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/How-GenAI-expands-the-responsible-AI-risk-surface.png "A large circular cluster of six orange circles surrounds a central gray circle containing a stylized AI icon with an exclamation mark. Each orange circle includes a white line icon and a numbered label, with lines extending outward to short text descriptions. The six risks read: hallucinated content, prompt injection attacks, jailbreaking and misuse, open-ended risk exposure, real-time output filtering required, and dynamic oversight needed. The circular layout resembles spokes around a hub, with short explanatory sentences next to each spoke.") Generative AI has changed the risk surface. It's no longer just about models running in the background. These systems now generate content, interact with users, and adapt to inputs in ways that are hard to predict. So responsible AI needs to account for a new set of challenges. Because many of the [GenAI security](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security) risks are behavioral---beyond purely statistical or architectural concerns. ![A horizontal workflow diagram begins with a simple user icon sending a prompt to a green square labeled Responsible AI, containing a circuit-style brain symbol. A filtered prompt flows to a circular gray icon representing an LLM app with a chat-bubble robot symbol, then a response flows to a second green Responsible AI box before reaching a final user icon. Beneath the first Responsible AI box, a panel lists toxicity detection, PII identification, prompt injection, and off-topic detections, each marked with green check symbols. Beneath the second Responsible AI box, a panel lists interpretability, hallucination score, toxicity score, data leakage, bias/fairness score, and confidence score. Lines labeled input policies and output policies connect the two lower panels to the LLM app.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-responsible-ai/Basic-responsible-GenAI-framework.png "A horizontal workflow diagram begins with a simple user icon sending a prompt to a green square labeled Responsible AI, containing a circuit-style brain symbol. A filtered prompt flows to a circular gray icon representing an LLM app with a chat-bubble robot symbol, then a response flows to a second green Responsible AI box before reaching a final user icon. Beneath the first Responsible AI box, a panel lists toxicity detection, PII identification, prompt injection, and off-topic detections, each marked with green check symbols. Beneath the second Responsible AI box, a panel lists interpretability, hallucination score, toxicity score, data leakage, bias/fairness score, and confidence score. Lines labeled input policies and output policies connect the two lower panels to the LLM app.") **Let's start with model behavior.** [Large language models](https://www.paloaltonetworks.com/cyberpedia/large-language-models-llm) can hallucinate. They can be jailbroken. They can respond to prompts that were never anticipated. Even when the training data is controlled, outputs can still be harmful, misleading, or biased. Especially in open-ended use cases. And these risks don't decrease with scale. They grow. **Then there's output safety.** It's not enough to monitor system performance. You have to monitor what the model produces. Content filtering, scoring systems, and UI-level interventions like user overrides or sandboxed generations all play a role. And that monitoring can't be one-time. It has to be continuous because context shifts as new users, use cases, and adversarial prompts emerge. **On the governance side, monitoring and red teaming have to evolve.** That means behavioral evaluations. It means testing for prompt injection, jailbreak pathways, and ethical alignment. And it means doing this before deployment. Ideally, before anything goes wrong in production. These challenges don't replace traditional responsible AI practices. They build on them. What used to rely on one-time reviews now requires ongoing oversight and real-time behavioral monitoring. In other words: Risk tiering and impact assessments still matter. But GenAI also demands systems that can catch harmful outputs and misuse early. Before they escalate at scale. | ***Further reading:*** * [*Top GenAI Security Challenges: Risks, Issues, \& Solutions*](https://www.paloaltonetworks.com/cyberpedia/generative-ai-security-risks) * [*How to Build a Generative AI Security Policy*](https://www.paloaltonetworks.com/cyberpedia/ai-security-policy) * [*What Is AI Prompt Security? Secure Prompt Engineering Guide*](https://www.paloaltonetworks.com/cyberpedia/what-is-ai-prompt-security) * [*What Is AI Red Teaming? Why You Need It and How to Implement*](https://www.paloaltonetworks.com/cyberpedia/what-is-ai-red-teaming) ![Icon of a network.](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/cyberpedia/what-is-agentic-ai-security/icon-quiz.svg) ## Quiz: How strong is your AI Security posture? Assess your AI governance, development, and deployment practices, plus get solution recommendations. [Take quiz](https://www.paloaltonetworks.com/resources/infographics/interactive-quiz#concerns) ## Responsible AI FAQs ### What is the definition of responsible AI? Responsible AI is the discipline of designing, developing, and governing AI systems so they operate safely, lawfully, and accountably across their lifecycle. It focuses on managing risk, ensuring oversight, documenting decisions, and preventing harmful or unintended outcomes in real‑world use. ### What are the 6 responsible AI principles? Six commonly referenced responsible AI principles are fairness, robustness, transparency, privacy, accountability, and human oversight. These principles guide how AI systems are defined, built, evaluated, and monitored, and map directly to lifecycle actions such as data selection, testing, documentation, and post‑deployment monitoring. ### What is the difference between responsible AI and ethical AI? Responsible AI focuses on operational governance, risk management, and accountability throughout the AI lifecycle. Ethical AI focuses on values, intent, and societal norms. In other words, ethical AI concerns what should happen, while responsible AI concerns what organizations do to prevent harm and ensure trustworthy behavior. ### What are the 4 pillars of responsible AI? Four core pillars commonly used in responsible AI frameworks are governance, risk management, transparency, and accountability. These pillars anchor lifecycle controls, clarify roles, structure review processes, and support the documentation and oversight required for trustworthy, well‑governed AI systems. ### What is an example of responsible AI in real life? A real example is conducting an AI impact assessment before deploying a model. This includes defining intended use, identifying potential harms, evaluating data quality, documenting decision logic, and setting monitoring and escalation procedures aligned with ISO/IEC 42005 and NIST AI RMF practices. Related content [White paper: AI Security: Navigating the New Frontier of Cyber Defense Find out why categorizing AI security as a standard security control can pose significant risks.](https://www.paloaltonetworks.com/resources/whitepapers/ai-security-navigating-the-new-frontier-of-cyber-defense) [Guide: The C-Suite Guide to GenAI Risk Management Learn a strategic framework for managing the risks associated with GenAI.](https://www.paloaltonetworks.com/resources/guides/the-c-suite-guide-to-genai-risk-management) [eBook: Is Your AI Ecosystem Secure? Discover the blueprint for protecting all your AI investments.](https://www.paloaltonetworks.com/resources/ebooks/is-your-ai-ecosystem-secure) [Report: The State of Generative AI 2025 Read the latest data on GenAi adoption and usage.](https://www.paloaltonetworks.com/resources/research/state-of-genai-2025) ![Share page on facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/facebook-circular-icon.svg) ![Share page on linkedin](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/linkedin-circular-icon.svg) [![Share page by an email](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/resources/email-circular-icon.svg)](mailto:?subject=What%20Is%20Responsible%20AI%3F%20Principles%2C%20Pitfalls%2C%20%26%20How-tos&body=Responsible%20AI%20is%20the%20discipline%20of%20designing%2C%20developing%2C%20and%20deploying%20AI%20systems%20in%20ways%20that%20are%20lawful%2C%20safe%2C%20and%20aligned%20with%20human%20values.%20at%20https%3A//www.paloaltonetworks.com/cyberpedia/what-is-responsible-ai) Back to Top {#footer} ## Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) ## Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) ## Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language