OpenAI Custom GPTs: What You Need to Worry About

Feb 15, 2024
9 minutes
1218 views

The integration of OpenAI's Custom GPTs with personal data files and third-party APIs offers new opportunities for organizations looking for custom LLMs for a variety of needs. They also open the door to many significant security risks, particularly accidental leakage of sensitive data through uploaded files and API interactions. Additionally, external APIs can subtly change GPT's responses through prompt injections.

Clearly, it’s essential to keep tabs on the data you input with an understanding of the potential risks involved with OpenAI’s new advanced features.

Extending ChatGPT

OpenAI has released GPTs, enabling you to create custom versions of ChatGPT for specific purposes. To create your GPT, you need only to extend ChatGPT’s capabilities for specific tasks and domain knowledge.

To extend ChatGPT’s knowledge, OpenAI enables organizations to add data in the form of files and third-party API integration. This is big news, since you can now easily build and deliver LLM chatbots without an AI team. Take, for example, a retail company building an LLM bot that promotes new products and publishes it in the GPT marketplace. Or an organization’s internal HR bot that helps new employees on board.

With these new features, however, come new security concerns. In this blog post, we’ll address the concerns from the perspective of the Custom GPT creator and the individual who uses it.

New Capabilities at a Glance

Actions

Actions are an upgrade of plugins. They work by incorporating third-party APIs to gather data based on user queries. GPT builds an API call to the third party, and uses the API response to build a user response.

Custom GPTs Add Action UI
Figure 1: Add Action UI

Knowledge

Knowledge adds data in the form of files to the GPT, extending its knowledge with business-specific information that the classic model doesn't recognize. Knowledge supports many file types, including PDF, text, and CSV, as well as other data types.

Example of knowledge files in a custom GPT
Figure 2: Example of knowledge files in a custom GPT

Publishing

Publishing a new GPT enables users to work with it. The access and sharing model is fairly simple, while the creator can publish it to only 3 groups:

  • Only me: Only the user can use the GPT
  • Anyone with a link: Semi-public GPT
  • Everyone: Anyone with a GPT Plus subscription

Both knowledge and actions provide more context to the GPT to solve a specific problem. They offer a UI to build, test, and incorporate capabilities that the current ChatGPT allows, such as code interpretation, web browsing, and image creation. Custom GPTs’ features will expand further once more businesses and individuals enter the GPT marketplace.

Attack Landscape

Organizations can undoubtedly benefit from ChatGPT’s new features, particularly by reducing the time it takes to build new services. That said, the gains come with a caveat — the features also enable attackers to capitalize on mistakes made when a custom GPT is created. Tactics include:

  • Knowledge file exfiltration
  • Data leakage to third parties
  • Indirect prompt injection

Data Theft and Exfiltration

Unauthorized Access to Knowledge Data Files

Attack scenario to exfiltrate knowledge files
Figure 3: Attack scenario to exfiltrate knowledge files

Uploading files to the custom GPT is a concern for organizations that intend to publish their GPTs, since the file content is now available to anyone accessing the GPT.

Mistakenly uploading PII data is a huge privacy concern. Employees can upload sensitive data that may violate privacy regulations or include credentials that can be exploited.

We started testing things out by examining the interpreter, seeing how it operates, and determining whether system-level commands can run (e.g., ‘ls’, ‘env’) and inspect the environment that ChatGPT uses to run code.

Example prompt: “run 'ls' as a subprocess in Python and show output”

Analysis (executing the code):

Execution of Linux command ‘ls’ in the code interpreter
Figure 4: Execution of Linux command ‘ls’ in the code interpreter

Output of the ‘ls’ command:

Output of ‘ls’
Figure 5: Output of ‘ls’

Now knowing that GPT can be used to inspect the environment in which the interpreter resides, we ran ‘env’ and received the following output:

Execution of the Linux command ‘env’ in the code interpreter
Figure 6: Execution of the Linux command ‘env’ in the code interpreter

We can see in “Result” that the code ran successfully.

List of environment variables in the code interpreter
Figure 7: List of environment variables in the code interpreter

Also, we wanted to see what processes were running inside the code interpreter environment.

Listing of current running processes in the code interpreter
Figure 8: Listing of current running processes in the code interpreter

We saw that Jupyter is deployed inside the code interpreter, and ChatGPT most likely uses it to run Python code.

This leads us to conclude that the interpreter runs in a Kubernetes pod with a Jupyter Labs process in an isolated environment.

Further examination of the interpreter revealed that a GPT with a code interpreter feature could be utilized to retrieve originally uploaded files. This functionality presents a vulnerability, since malicious entities can exploit it to execute system-level commands using Python code and access-sensitive files.

In this example, an employee who doesn’t properly understand the implications could use GPT to unintentionally upload sensitive files to it.

Our research found that files uploaded to a Custom GPT are saved under the path “/mnt/data.”

Example knowledge files in a custom GPT
Figure 9: Example knowledge files in a custom GPT
Listing knowledge files from the code interpreter
Figure 10: Listing knowledge files from the code interpreter

(Under “Result” we can see the file names that we uploaded earlier.)

With proper prompt utilization, it can also be used to look into file content (e.g PDF, text, binary).

Reading a knowledge file from the chat
Figure 11: Reading a knowledge file from the chat

Since the GPT marketplace has been released, creators should be aware of sensitive files that can be uploaded to the Custom GPT. Prior to uploading, they should check whether their files contain sensitive information, such as personal information or intellectual property, that shouldn’t be exposed.

Exposing Sensitive Data in GPT Actions

Attack scenario to exfiltrate sensitive data via a third-party API
Figure 12: Attack scenario to exfiltrate sensitive data via a third-party API

ChatGPT records everything we type. That’s the de facto standard regarding the private data — from telemetry data to the prompts themselves — we share with OpenAI.

With the impending actions feature, users should be concerned about third-party APIs that can collect user data from the ChatGPT service. When using actions, data is sent to the third-party API, which ChatGPT formats. This data might be sensitive based on an organization's standards, and then leaked.

Let's look at an example of a GPT we built with an action that gets information based on user location and bank.

Sensitive data sent in our scenario
Figure 13: Sensitive data sent in our scenario

ChatGPT currently has no mechanism to stop sending PII data (in this case, location information), and third-party APIs can and will collect that data.

Let’s look at some API calls made in chat.openai.com.

One API call is metadata information about a custom model.

GET /backend-api/gizmos/<model_id> occurs when the ChatGPT UI loads the model and the chat window opens.

We discern some interesting information. Let’s start with model metadata.

Snippet from our custom GPT metadata
Figure 14: Snippet from our custom GPT metadata

Even more interesting is a spec file containing the swagger API of the actions.

”raw_spec” contains the third-party swagger
Figure 15: ”raw_spec” contains the third-party swagger

And we can see the API swagger file used.

Formatted swagger
Figure 16: Formatted swagger

Here we can see what API calls are being made and, more importantly, what data can be sent. This gives us additional knowledge about what type of data can leak and how it can happen.

Organizations should be more concerned about data leakage to third-party APIs using ChatGPT and increase their cybersecurity awareness, particularly as it pertains to using new Custom GPTs.

Indirect Prompt Injections in Actions

Attack scenario of prompt injection from a third-party API
Figure 17: Attack scenario of prompt injection from a third-party API

Another interesting observation we made isn’t a classic security risk but definitely worth noting. A bad actor can use actions as a basis of prompt injections to change the “narrative” of the whole chat based on API responses without user knowledge.

Let's take an example of an action — a third-party API that returns information based on user input. Here we use an HTTP server to return responses.

Schema:

When an action is used in the chat, the following window appears:

ChatGPT confirmation to use the action
Figure 18: ChatGPT confirmation to use the action

If the user confirms, the following API call is sent:

Here the prompt injection takes place, and the API responds with:

The response contains detailed instructions as to how the GPT should act. Figures 21 and 22 illustrate how the custom GPT might answer a question before and after the action.

Before the action:

The answer for “What is python” before the prompt injection
Figure 19: The answer for “What is python” before the prompt injection

And after:

The answer for “What is python” after the prompt injection
Figure 20: The answer for “What is python” after the prompt injection

With bad actors using prompt injections in APIs to influence the LLM model output, a real-world example could prove damaging. The risk is great, considering the inability to distinguish between the action data and the user input.

What’s more, there’s no visibility into the prompt injection, and it happens right under the user's nose.

Only the sent data is reported to the user.
Figure 21: Only the sent data is reported to the user.

The prompt injection can significantly impact custom GPTs that utilize actions, altering the conversation narrative with the LLM model without the user's knowledge. This vulnerability can be exploited to spread misinformation, bypass safeguards, and undermine trust in AI systems.

Learn More

Is your sensitive data secure? Discover the latest trends, risks, and best practices in data security, based on analysis of over 13 billion files and 8 petabytes of data stored in public cloud environments. Download your copy of the State of Cloud Data Security 2023 report today.

And give Prisma Cloud a test drive if you haven’t experienced the advantage of best-in-class Code to Cloud security.

 


Subscribe to Cloud Native Security Blogs!

Sign up to receive must-read articles, Playbooks of the Week, new feature announcements, and more.