* [Blog](https://www.paloaltonetworks.com/blog) * [Cloud Security](https://www.paloaltonetworks.com/blog/cloud-security/) * [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/) * OpenAI Custom GPTs: What ... # OpenAI Custom GPTs: What You Need to Worry About [](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fopenai-custom-gpts-security%2F) [](https://twitter.com/share?text=OpenAI+Custom+GPTs%3A+What+You+Need+to+Worry+About&url=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fopenai-custom-gpts-security%2F) [](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fopenai-custom-gpts-security%2F&title=OpenAI+Custom+GPTs%3A+What+You+Need+to+Worry+About&summary=&source=) [](https://www.paloaltonetworks.com//www.reddit.com/submit?url=https://www.paloaltonetworks.com/blog/cloud-security/openai-custom-gpts-security/&ts=markdown) \[\](mailto:?subject=OpenAI Custom GPTs: What You Need to Worry About) Link copied By [Ofir Balassiano](https://www.paloaltonetworks.com/blog/author/ofir-balassiano/?ts=markdown "Posts by Ofir Balassiano") and [David Nir Orlovsky](https://www.paloaltonetworks.com/blog/author/david-nir-orlovsky/?ts=markdown "Posts by David Nir Orlovsky") Feb 15, 2024 9 minutes [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown) [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown) [DevOps](https://www.paloaltonetworks.com/blog/cloud-security/category/devops/?ts=markdown) [Research](https://www.paloaltonetworks.com/blog/cloud-security/category/research/?ts=markdown) [Application Security](https://www.paloaltonetworks.com/blog/tag/application-security/?ts=markdown) [Cloud Research](https://www.paloaltonetworks.com/blog/tag/cloud-research/?ts=markdown) The integration of OpenAI's Custom GPTs with personal data files and third-party APIs offers new opportunities for organizations looking for custom LLMs for a variety of needs. They also open the door to many significant security risks, particularly accidental leakage of sensitive data through uploaded files and API interactions. Additionally, external APIs can subtly change GPT's responses through prompt injections. Clearly, it's essential to keep tabs on the data you input with an understanding of the potential risks involved with OpenAI's new advanced features. ## Extending ChatGPT OpenAI has released [GPTs](https://openai.com/blog/introducing-gpts), enabling you to create custom versions of ChatGPT for specific purposes. To create your GPT, you need only to extend ChatGPT's capabilities for specific tasks and domain knowledge. To extend ChatGPT's knowledge, OpenAI enables organizations to add data in the form of files and third-party API integration. This is big news, since you can now easily build and deliver LLM chatbots without an AI team. Take, for example, a retail company building an LLM bot that promotes new products and publishes it in the GPT marketplace. Or an organization's internal HR bot that helps new employees on board. With these new features, however, come new security concerns. In this blog post, we'll address the concerns from the perspective of the Custom GPT creator and the individual who uses it. ## New Capabilities at a Glance ### Actions Actions are an upgrade of plugins. They work by incorporating third-party APIs to gather data based on user queries. GPT builds an API call to the third party, and uses the API response to build a user response. ![Custom GPTs Add Action UI](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-1.png) Figure 1: Add Action UI ### Knowledge Knowledge adds data in the form of files to the GPT, extending its knowledge with business-specific information that the classic model doesn't recognize. Knowledge supports many file types, including PDF, text, and CSV, as well as other data types. ![Example of knowledge files in a custom GPT](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-2.png) Figure 2: Example of knowledge files in a custom GPT ### Publishing Publishing a new GPT enables users to work with it. The access and sharing model is fairly simple, while the creator can publish it to only 3 groups: * Only me: Only the user can use the GPT * Anyone with a link: Semi-public GPT * Everyone: Anyone with a GPT Plus subscription Both knowledge and actions provide more context to the GPT to solve a specific problem. They offer a UI to build, test, and incorporate capabilities that the current ChatGPT allows, such as code interpretation, web browsing, and image creation. Custom GPTs' features will expand further once more businesses and individuals enter the GPT marketplace. ## Attack Landscape Organizations can undoubtedly benefit from ChatGPT's new features, particularly by reducing the time it takes to build new services. That said, the gains come with a caveat --- the features also enable attackers to capitalize on mistakes made when a custom GPT is created. Tactics include: * Knowledge file exfiltration * Data leakage to third parties * Indirect prompt injection ## Data Theft and Exfiltration ### Unauthorized Access to Knowledge Data Files ![Attack scenario to exfiltrate knowledge files](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-3.png) Figure 3: Attack scenario to exfiltrate knowledge files Uploading files to the custom GPT is a concern for organizations that intend to publish their GPTs, since the file content is now available to anyone accessing the GPT. [Mistakenly uploading PII data is a huge privacy concern](https://twitter.com/zuhayeer/status/1723065439910011055). Employees can upload sensitive data that may violate privacy regulations or include credentials that can be exploited. We started testing things out by examining the interpreter, seeing how it operates, and determining whether system-level commands can run (e.g., 'ls', 'env') and inspect the environment that ChatGPT uses to run code. Example prompt: **"run 'ls' as a subprocess in Python and show output"** Analysis (executing the code): ![Execution of Linux command ‘ls’ in the code interpreter](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-4.png) Figure 4: Execution of Linux command 'ls' in the code interpreter Output of the 'ls' command: ![Output of ‘ls’](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-5.png) Figure 5: Output of 'ls' Now knowing that GPT can be used to inspect the environment in which the interpreter resides, we ran **'env'** and received the following output: ![Execution of the Linux command ‘env’ in the code interpreter](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-6.png) Figure 6: Execution of the Linux command 'env' in the code interpreter We can see in "Result" that the code ran successfully. ![List of environment variables in the code interpreter](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-7.png) Figure 7: List of environment variables in the code interpreter Also, we wanted to see what processes were running inside the code interpreter environment. ![Listing of current running processes in the code interpreter](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-8.png) Figure 8: Listing of current running processes in the code interpreter We saw that Jupyter is deployed inside the code interpreter, and ChatGPT most likely uses it to run Python code. **This leads us to conclude that the interpreter runs in a Kubernetes pod with a Jupyter Labs process in an isolated environment.** Further examination of the interpreter revealed that a GPT with a code interpreter feature could be utilized to retrieve originally uploaded files. This functionality presents a vulnerability, since malicious entities can exploit it to execute system-level commands using Python code and access-sensitive files. In this example, an employee who doesn't properly understand the implications could use GPT to unintentionally upload sensitive files to it. Our research found that files uploaded to a Custom GPT are saved under the path **"/mnt/data."** ![Example knowledge files in a custom GPT](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-9.png) Figure 9: Example knowledge files in a custom GPT ![Listing knowledge files from the code interpreter](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-10.png) Figure 10: Listing knowledge files from the code interpreter (Under "Result" we can see the file names that we uploaded earlier.) With proper prompt utilization, it can also be used to look into file content (e.g PDF, text, binary). ![Reading a knowledge file from the chat](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-11.png) Figure 11: Reading a knowledge file from the chat Since the GPT marketplace has been released, creators should be aware of sensitive files that can be uploaded to the Custom GPT. Prior to uploading, they should check whether their files contain sensitive information, such as personal information or intellectual property, that shouldn't be exposed. ## Exposing Sensitive Data in GPT Actions ![Attack scenario to exfiltrate sensitive data via a third-party API](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-12.png) Figure 12: Attack scenario to exfiltrate sensitive data via a third-party API ChatGPT records everything we type. That's the de facto standard regarding the private data --- from telemetry data to the prompts themselves --- we share with OpenAI. With the impending actions feature, users should be concerned about third-party APIs that can collect user data from the ChatGPT service. When using actions, data is sent to the third-party API, which ChatGPT formats. This data might be sensitive based on an organization's standards, and then leaked. Let's look at an example of a GPT we built with an action that gets information based on user location and bank. ![Sensitive data sent in our scenario](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-13.png) Figure 13: Sensitive data sent in our scenario ChatGPT currently has no mechanism to stop sending PII data (in this case, location information), and third-party APIs can and will collect that data. Let's look at some API calls made in chat.openai.com. One API call is metadata information about a custom model. \*\*GET /backend-api/gizmos/\\*\*occurs when the ChatGPT UI loads the model and the chat window opens. We discern some interesting information. Let's start with model metadata. ![Snippet from our custom GPT metadata](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-14.png) Figure 14: Snippet from our custom GPT metadata Even more interesting is a spec file containing the swagger API of the actions. ![”raw\_spec” contains the third-party swagger](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-15.png) Figure 15: "raw\_spec" contains the third-party swagger And we can see the API swagger file used. ![Formatted swagger](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-16.png) Figure 16: Formatted swagger Here we can see what API calls are being made and, more importantly, what data can be sent. This gives us additional knowledge about what type of data can leak and how it can happen. Organizations should be more concerned about data leakage to third-party APIs using ChatGPT and increase their cybersecurity awareness, particularly as it pertains to using new Custom GPTs. ## Indirect Prompt Injections in Actions ![Attack scenario of prompt injection from a third-party API](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-17.png) Figure 17: Attack scenario of prompt injection from a third-party API Another interesting observation we made isn't a classic security risk but definitely worth noting. A bad actor can use actions as a basis of prompt injections to change the "narrative" of the whole chat based on API responses without user knowledge. Let's take an example of an action --- a third-party API that returns information based on user input. Here we use an HTTP server to return responses. Schema: ![](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-18.png) When an action is used in the chat, the following window appears: ![ChatGPT confirmation to use the action](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-19.png) Figure 18: ChatGPT confirmation to use the action If the user confirms, the following API call is sent: Here the prompt injection takes place, and the API responds with: The response contains detailed instructions as to how the GPT should act. Figures 21 and 22 illustrate how the custom GPT might answer a question before and after the action. **Before the action:** ![The answer for “What is python” before the prompt injection](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-20.png) Figure 19: The answer for "What is python" before the prompt injection **And after:** ![The answer for “What is python” after the prompt injection](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-21.png) Figure 20: The answer for "What is python" after the prompt injection With bad actors using prompt injections in APIs to influence the LLM model output, a real-world example could prove damaging. The risk is great, considering the inability to distinguish between the action data and the user input. What's more, there's no visibility into the prompt injection, and it happens right under the user's nose. ![Only the sent data is reported to the user.](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/02/word-image-314110-22.png) Figure 21: Only the sent data is reported to the user. The prompt injection can significantly impact custom GPTs that utilize actions, altering the conversation narrative with the LLM model without the user's knowledge. This vulnerability can be exploited to spread misinformation, bypass safeguards, and undermine trust in AI systems. ## Learn More Is your sensitive data secure? Discover the latest trends, risks, and best practices in data security, based on analysis of over 13 billion files and 8 petabytes of data stored in public cloud environments. Download your copy of the [State of Cloud Data Security 2023 report](https://www.paloaltonetworks.com/resources/research/data-security-2023-report?ts=markdown) today. And [give Prisma Cloud a test drive](https://www.paloaltonetworks.com/prisma/request-a-prisma-cloud-trial?ts=markdown) if you haven't experienced the advantage of best-in-class Code to Cloud security. *** ** * ** *** ## Related Blogs ### [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown), [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown) [#### Are Cloud Serverless Functions Exposing Your Data?](https://www.paloaltonetworks.com/blog/cloud-security/secure-access-cloud-serverless-functions/) ### [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown), [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown), [Research](https://www.paloaltonetworks.com/blog/cloud-security/category/research/?ts=markdown) [#### Mastering Data Flow: Enhancing Security and Compliance in the Cloud](https://www.paloaltonetworks.com/blog/cloud-security/mastering-data-flow-security-compliance/) ### [AppSec](https://www.paloaltonetworks.com/blog/cloud-security/category/appsec/?ts=markdown), [CI/CD](https://www.paloaltonetworks.com/blog/cloud-security/category/ci-cd/?ts=markdown), [Research](https://www.paloaltonetworks.com/blog/cloud-security/category/research/?ts=markdown) [#### Third-Party GitHub Actions: Effects of an Opt-Out Permission Model](https://www.paloaltonetworks.com/blog/cloud-security/github-actions-opt-out-permissions-model/) ### [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown), [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown) [#### Is Your Snowflake Data at Risk? Find and Protect Sensitive Data with DSPM](https://www.paloaltonetworks.com/blog/cloud-security/protect-sensitive-data-dspm-snowflake/) ### [Announcement](https://www.paloaltonetworks.com/blog/cloud-security/category/announcement/?ts=markdown), [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown), [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown) [#### Data Security, Meet Remediation: Introducing the New Integration Between Prisma Cloud DSPM and Cortex XSOAR](https://www.paloaltonetworks.com/blog/cloud-security/dspm-xsoar-data-security/) ### [Data Security](https://www.paloaltonetworks.com/blog/category/data-security/?ts=markdown), [Data Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/data-security-posture-management/?ts=markdown) [#### DSPM-Driven Data Context to Improve Attack Path Analysis and Prioritization](https://www.paloaltonetworks.com/blog/cloud-security/dspm-attack-path-prioritization/) ### Subscribe to Cloud Security Blogs! Sign up to receive must-read articles, Playbooks of the Week, new feature announcements, and more. ![spinner](https://www.paloaltonetworks.com/blog/wp-content/themes/panwblog2023/dist/images/ajax-loader.gif) Sign up Please enter a valid email. By submitting this form, you agree to our [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) and acknowledge our [Privacy Statement](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown). Please look for a confirmation email from us. If you don't receive it in the next 10 minutes, please check your spam folder. This site is protected by reCAPTCHA and the Google [Privacy Policy](https://policies.google.com/privacy) and [Terms of Service](https://policies.google.com/terms) apply. {#footer} {#footer} ## Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) ## Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) ## Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language