* [Blog](https://www.paloaltonetworks.com/blog) * [Cloud Security](https://www.paloaltonetworks.com/blog/cloud-security/) * [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/) * Deploying Secure LLM and ... # Deploying Secure LLM and RAG Applications with Amazon Bedrock and Prisma Cloud [](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fdeploy-secure-llm-rag-applications%2F) [](https://twitter.com/share?text=Deploying+Secure+LLM+and+RAG+Applications+with+Amazon+Bedrock+and+Prisma+Cloud&url=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fdeploy-secure-llm-rag-applications%2F) [](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.paloaltonetworks.com%2Fblog%2Fcloud-security%2Fdeploy-secure-llm-rag-applications%2F&title=Deploying+Secure+LLM+and+RAG+Applications+with+Amazon+Bedrock+and+Prisma+Cloud&summary=&source=) [](https://www.paloaltonetworks.com//www.reddit.com/submit?url=https://www.paloaltonetworks.com/blog/cloud-security/deploy-secure-llm-rag-applications/&ts=markdown) \[\](mailto:?subject=Deploying Secure LLM and RAG Applications with Amazon Bedrock and Prisma Cloud) Link copied By [Sharon Farber](https://www.paloaltonetworks.com/blog/author/sharon-farber/?ts=markdown "Posts by Sharon Farber") Dec 03, 2024 7 minutes [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown) [CSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/cspm/?ts=markdown) [DSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/dspm/?ts=markdown) [AWS](https://www.paloaltonetworks.com/blog/tag/aws/?ts=markdown) Rapid advancements in generative AI have enabled organizations to build powerful applications by integrating foundation models with internal data and systems. The potential rewards are high, but so are the risks --- from [sensitive data](https://www.paloaltonetworks.com/cyberpedia/sensitive-data?ts=markdown) exposure and compliance violations to prompt injection and attacks against deployed applications. To mitigate AI-related risk, organizations need a comprehensive approach that covers the application lifecycle from data collection to model output. In today's blog post, we look at the current state of [genAI security](https://www.paloaltonetworks.com/cyberpedia/ai-security?ts=markdown) and how Prisma Cloud by Palo Alto Networks can be used alongside Amazon Bedrock to build, manage and deploy secure, AI-powered applications. [Prisma Cloud AI-SPM](https://www.paloaltonetworks.com/prisma/cloud/ai-spm?ts=markdown) (AI security posture management) and [Prisma Cloud DSPM](https://www.paloaltonetworks.com/prisma/cloud/cloud-data-security?ts=markdown) (data security posture management) are integrated capabilities that provide visibility, risk analysis and security controls for AI assets and sensitive data across cloud environments. Amazon Bedrock is a fully managed service that offers access to a range of foundation models and tools for building and deploying generative AI applications securely, including fine-tuning, RAG, developer experience and customization tools. Amazon Bedrock Guardrails, which has recently been made available as a standalone API, adds runtime protection capabilities that, among other things, block harmful content and identify hallucinations in [LLM](https://www.paloaltonetworks.com/cyberpedia/large-language-models-llm?ts=markdown) outputs. ## Challenges of Securing GenAI as It Matures from Novelty to Feature As organizations move from experimenting with generative AI to deploying it at scale, three key trends have emerged, each with security implications. ### 1. Integration with internal data and systems highlights data risk. "Vanilla" LLMs need to be augmented with proprietary data to create significant enterprise value. To that end, organizations are fine-tuning models on domain-specific data or using techniques like retrieval augmented generation (RAG) to inject relevant information into prompts to create more customized and capable AI assistants. This integration, however, creates new attack surfaces. Sensitive data used for fine-tuning or passed to the model at inference time can inadvertently be exposed in the model's outputs. For example, an LLM fine-tuned on a company's internal documents might leak snippets of confidential information when asked the right prompts. Organizations must meticulously control access to inference data and monitor model inputs and outputs to prevent data exfiltration and comply with data privacy regulations. Figure 1 illustrates how sensitive data can be exposed by a fine-tuned model. Risk materializes during the fine-tuning process and becomes embedded in the model. The only means to remove the risk after the fact will require redeploying the model. ![Sensitive data exposed by a fine-tuned model](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/11/word-image-332072-1.jpeg) Figure 1: Sensitive data exposed by a fine-tuned model RAG poses another risk of data exposure. The example seen in figure 2 is more straightforward to mitigate than the above example, as the exposure occurs at the "retrieval" stage and is not inherent in the model. Removing sensitive data from the inference data eliminates the risk. ![Personally identifiable information (PII) exposure via RAG](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/11/word-image-332072-2.jpeg) Figure 2: Personally identifiable information exposure via RAG ### 2. Proliferation of models and frameworks complicates effective governance efforts. The OpenAI API was the only viable option for adding LLM capabilities to applications just a few years ago. Today, developers can choose from a range of open-source foundation models --- including specific models tailored for image, speech or video generation. Deployment models have also diversified. In addition to inference APIs offered as SaaS, models can now be deployed in private cloud environments, self-managed infrastructure and, in some cases, locally. Variety enables developers to choose the most efficient and effective model per task, ultimately reducing costs and increasing quality. But variety also makes life more complicated for security teams. Shadow AI deployments using unsanctioned models may crop up, making it difficult to assess the overall security posture. The supply chain for models also needs scrutiny. A malicious actor, for example, could publish a backdoored model or dataset aiming to poison downstream models on which it's trained. Organizations need ways to discover models deployed across their environment and trace their provenance and lineage. ### 3. The shift from experimentation to production requires new controls. As companies move beyond developing initial proof of concepts to deploying genAI in production, a new set of application security risks have emerged. Prompt injection attacks, where cleverly crafted text prompts elicit unintended behaviors from a model, have already targeted popular models and AI assistants. Attackers use these techniques to bypass content filters, trick AI systems into accessing unauthorized resources or expose sensitive information via the model's outputs. To prevent attacks targeting models at runtime, organizations need to proactively identify cases where a model is deployed without adequate safety and security controls (such as output monitoring). Identifying these issues early can mitigate risks that prove harder to catch and prevent once an AI application is in the wild. ## Securing LLM-Powered Applications from Data to Runtime with Prisma Cloud and Amazon Bedrock Amazon Bedrock and Prisma Cloud provide a complete solution to help organizations deploy secure LLM-powered applications --- and protect them across the entire lifecycle. ### Protecting Data Used for RAG or Fine-Tuning Protecting the data used to train, fine-tune or augment models is the foundation of secure AI applications. Prisma Cloud DSPM provides comprehensive visibility into your inference data, whether it's stored in S3 buckets, databases or other cloud services. The solution can identify a database containing sensitive customer information being used in RAG workflows such as OpenSearch, for example. This knowledge enables security and compliance teams to weigh in on whether the data should become part of the application's inference data. ![Prisma Cloud AI-SPM shows an AI application containing sensitive data.](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/11/word-image-332072-3.png) Figure 3: Prisma Cloud AI-SPM shows an AI application containing sensitive data. Prisma Cloud goes beyond simple discovery to mapping cloud services and identities with access to particular data sources. This context helps security teams prioritize their efforts, enabling them to focus initially on protecting the most sensitive and widely accessible data. Amazon Bedrock serves as a centralized platform for RAG operations, providing clear visibility into which services are being used for inference and ensuring that access to internal data is properly managed and logged. ### Identifying Model Misconfigurations with AI-SPM The broad range of models used across development, testing and production environments creates a need for visibility and control. Amazon Bedrock centralizes model deployment and management, giving organizations a dedicated, managed service to deploy AI in your environment. Centralizing these services makes it easier to track which models are in use, who has access to them, and how they're being utilized. Additionally, [Custom Model Import](https://aws.amazon.com/bedrock/custom-model-import/) (currently in preview) can be used to import weights for fine-tuned or modified models stored in your AWS account. ![Prisma Cloud AI-SPM showing alerts associated with AI applications](https://www.paloaltonetworks.com/blog/wp-content/uploads/2024/11/word-image-332072-4.png) Figure 4:: Prisma Cloud AI-SPM showing alerts associated with AI applications Complementing this, Prisma Cloud AI-SPM enables you to see which models are deployed in your cloud environment --- whether via Bedrock or other APIs --- and identify risks related to the data used for training or fine-tuning. For instance, Prisma Cloud might detect a publicly accessible AI-powered chatbot fine-tuned on sensitive data. Or it could identify model weights stored on S3 and ensure protections are applied to prevent supply chain attacks. ### Protecting the Application Layer Avoiding post-deployment attacks on AI applications is the final piece of the puzzle. [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/) makes it easy to monitor prompts sent to LLM APIs, helping to prevent prompt injection attacks and other misuses. Organizations can tailor the guardrails to their use case and risk tolerance, enabling them to set up content filtering, implement input validation and control model outputs. Prisma Cloud complements this by identifying AI assets that lack sufficient security controls or safety tools, such as models deployed without content filtering. The proactive approach allows security teams to address vulnerabilities before they're exploited, with Bedrock Guardrails providing an additional layer of control. ## Build Your AI Applications with Security at the Core Organizations can't afford to fall behind in the race to production AI, nor can they compromise on critical security features. By combining Prisma Cloud's comprehensive visibility and risk analysis with Amazon Bedrock and Amazon Bedrock Guardrails, they can confidently build and deploy AI applications, knowing they're protected from data to model to deployment. ### Next Steps * Download [AI Governance for AI-Powered Applications](https://www.paloaltonetworks.com/resources/whitepapers/ai-governance?ts=markdown) to learn how to establish a governance framework for your AI-powered applications. * Visit the [Prisma Cloud website to learn more about Prisma Cloud AI-SPM.](https://www.paloaltonetworks.com/prisma/cloud/ai-spm?ts=markdown) * Explore [Prisma Cloud offerings on the AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-fhoptf6o4hcyu?sr=0-7&ref_=beagle&applicationId=AWSMPContessa). * Check out [Amazon Bedrock](https://aws.amazon.com/bedrock/?sec=aiapps&pos=2) and [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/). *** ** * ** *** ## Related Blogs ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown), [ASPM](https://www.paloaltonetworks.com/blog/cloud-security/category/aspm/?ts=markdown), [CIEM](https://www.paloaltonetworks.com/blog/cloud-security/category/ciem/?ts=markdown), [DSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/dspm/?ts=markdown) [#### AI-SPM Update: 3 New Capabilities for Model Activity, Agentic AI and Software Supply Chain Risks](https://www.paloaltonetworks.com/blog/cloud-security/aispm-capabilities-enhanced-security/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown), [DSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/dspm/?ts=markdown) [#### Model Context Protocol (MCP): A Security Overview](https://www.paloaltonetworks.com/blog/cloud-security/model-context-protocol-mcp-a-security-overview/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown), [Artificial Intelligence](https://www.paloaltonetworks.com/blog/cloud-security/category/artificial-intelligence/?ts=markdown), [Cloud Security](https://www.paloaltonetworks.com/blog/cloud-security/category/cloud-security/?ts=markdown), [CSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/cspm/?ts=markdown) [#### The Rise of AI-Powered IDEs: What the Windsurf Acquisition News Mean for Security Teams](https://www.paloaltonetworks.com/blog/cloud-security/windsurf-openai-acquisition/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown), [CSPM](https://www.paloaltonetworks.com/blog/cloud-security/category/cspm/?ts=markdown) [#### Complying with OWASP Top 10 for LLM Applications and NIST AI 600-1](https://www.paloaltonetworks.com/blog/cloud-security/ai-application-security-owasp-llm-nist/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown), [Cloud Security](https://www.paloaltonetworks.com/blog/cloud-security/category/cloud-security/?ts=markdown) [#### Don't Let Inactive AI Models Linger: Reduce Risk and Cost with Cortex Cloud](https://www.paloaltonetworks.com/blog/cloud-security/cloud-security-inactive-ai-model-risk/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [AI Security Posture Management](https://www.paloaltonetworks.com/blog/cloud-security/category/ai-security-posture-management/?ts=markdown) [#### Implementing AI Security with Cortex Cloud AI-SPM](https://www.paloaltonetworks.com/blog/cloud-security/implementing-ai-security-with-cortex-cloud-ai-spm/) ### Subscribe to Cloud Security Blogs! Sign up to receive must-read articles, Playbooks of the Week, new feature announcements, and more. ![spinner](https://www.paloaltonetworks.com/blog/wp-content/themes/panwblog2023/dist/images/ajax-loader.gif) Sign up Please enter a valid email. By submitting this form, you agree to our [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) and acknowledge our [Privacy Statement](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown). Please look for a confirmation email from us. If you don't receive it in the next 10 minutes, please check your spam folder. This site is protected by reCAPTCHA and the Google [Privacy Policy](https://policies.google.com/privacy) and [Terms of Service](https://policies.google.com/terms) apply. {#footer} {#footer} ## Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) ## Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) ## Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language