Facebook Google Plus Twitter LinkedIn YouTube RSS Menu Search Resource - BlogResource - WebinarResource - ReportResource - Eventicons_066 icons_067icons_068icons_069icons_070

OpenAI ChatGPT “Command Memories” Injection via SearchGPT

Medium

Synopsis

Tenable researchers discovered that an attacker could use indirect prompt injection in ChatGPT to insert "Command Memories" which enable the exfiltration of user memories, potentially including PII, from victims. This vulnerability remains unpatched. The researchers identified a new technique that involves injecting a malicious prompt into sites that allow user input by default, such as news sites and blogs with comment sections.

ChatGPT uses its web tool and operates another LLM, SearchGPT, to search and browse the URL the user requests. This isolation is presumably intended to reduce the risk of indirect prompt injection via browsing, as SearchGPT cannot access the user’s memories. The researchers discovered that this mechanism could be bypassed by asking SearchGPT to print instructions to ChatGPT. Using this method, memories could be both injected and exfiltrated.

Therefore, an attacker could plant indirect prompt injections in comments of various popular sites and articles, and if a victim were to ask ChatGPT to summarize one of them, the user’s memories could be compromised. 

Solution

A solution has yet to be deployed.

Disclosure Timeline

December 23, 2024 - Tenable requests security contact from vendor
December 23, 2024 - Tenable gets auto-reply referring to BugCrowd for reporting a vulnerability
December 30, 2024 - OpenAI asks for the tech details of the support ticket
December 30, 2024 - Tenable clarifies that this is a security issue and we need a contact to report a vulnerability
December 30, 2024 - OpenAI refers us once again to their BugCrowd page as the proper way to report a vulnerability in their platform
December 30, 2024 - Tenable clarifies we cannot use this platform and ask for another way to report a vulnerability
December 30, 2024 - OpenAI refers us once again to their BugCrowd page and says that "by participating in our Bug Bounty Program is the only way for us to assist you"
December 30, 2024 - An OpenAI staff member responds indicating we are using the correct address.
December 30, 2024 - Tenable provides full vulnerability report
January 3, 2025 - OpenAI adds another person to the conversation
January 12, 2025 - Tenable asks for a status update
January 26, 2025 - Tenable asks for a status update
February 9, 2025 - Tenable asks for a status update
February 23, 2025 - Tenable asks for a status update
March 10, 2025 - Tenable asks for a status update
March 16, 2025 - Tenable asks for a status update

All information within TRA advisories is provided “as is”, without warranty of any kind, including the implied warranties of merchantability and fitness for a particular purpose, and with no guarantee of completeness, accuracy, or timeliness. Individuals and organizations are responsible for assessing the impact of any actual or potential security vulnerability.

Tenable takes product security very seriously. If you believe you have found a vulnerability in one of our products, we ask that you please work with us to quickly resolve it in order to protect customers. Tenable believes in responding quickly to such reports, maintaining communication with researchers, and providing a solution in short order.

For more details on submitting vulnerability information, please see our Vulnerability Reporting Guidelines page.

If you have questions or corrections about this advisory, please email [email protected]

Risk Information

Tenable Advisory ID: TRA-2025-11
Credit:
Yarden Curiel
Moshe Bernstein
Liv Matan
Affected Products:
ChatGPT 4o With Search (SearchGPT)
Risk Factor:
Medium

Advisory Timeline

March 24, 2025 - Initial release.