OpenAI fastened a vulnerability that would have allowed attackers to steal delicate info via ChatGPT’s Deep Analysis agent.
Deep Analysis, a software unveiled by OpenAI in February, permits customers to ask ChatGPT to browse the web — or your private e-mail inbox — and generate an in depth report on its findings. The software will be built-in with functions like Gmail and GitHub, permitting folks to do deep dives into their very own private paperwork.
Cybersecurity agency Radware found a vulnerability they name “ShadowLeak” — the place researchers Gabi Nakibly, Zvika Babo and Maor Uziel demonstrated that an attacker might exploit the vulnerability by merely sending an e-mail to the person.
When somebody asks Deep Analysis to “summarize at the moment’s emails” or “analysis my inbox a couple of subject,” the agent ingests the booby‑trapped message and, with out additional person interplay, exfiltrates delicate knowledge by calling an attacker‑managed URL with non-public parameters like names, addresses or inner and delicate info.
As soon as the AI agent interacts with the malicious e-mail, delicate knowledge was extracted with out victims ever viewing, opening or clicking the message.
“That is the quintessential zero-click assault,” mentioned David Aviv, chief know-how officer at Radware. “There isn’t any person motion required, no seen cue and no method for victims to know their knowledge has been compromised. All the pieces occurs completely behind the scenes via autonomous agent actions on OpenAI cloud servers.”
A Radware spokesperson mentioned they didn’t see the vulnerability actively exploited within the wild.
Radware disclosed the bug to OpenAI on June 18 via vulnerability reporting platform BugCrowd. By early August, OpenAI mentioned the vulnerability was fastened and the corporate marked it as resolved on September 3.
A spokesperson for OpenAI confirmed to Recorded Future Information that the bug was reported to them via their bug bounty program.
“It’s crucial to us that we develop our fashions safely. We take steps to cut back the danger of malicious use, and we’re frequently enhancing safeguards to make our fashions extra sturdy towards exploits like immediate injections,” the OpenAI spokesperson mentioned. “Researchers usually check these techniques in adversarial methods, and we welcome their analysis because it helps us enhance.”
Zero clicks
Nakibly and Babo mentioned in a report on the bug that it leaves no community degree proof, “making these threats practically unattainable to detect from the attitude of the ChatGPT enterprise buyer.”
The scheme will be hidden in emails with tiny fonts, white-on-white textual content or different structure methods that make it so victims by no means see the instructions however the agent nonetheless reads and obeys it.
Nakibly and Babo mentioned the assault begins with a menace actor sending an innocent-looking e-mail titled “Restructuring Package deal – Motion Gadgets.” Contained in the physique of the e-mail, directions written in white coloring inform Deep Analysis to search out the worker’s full title and tackle within the inbox and open a so-called public worker lookup URL that factors to an attacker-controlled server.
“The e-mail incorporates a number of social engineering methods to bypass the agent’s security coaching and its reluctance to ship PII to a beforehand unknown URL,” the researchers mentioned.
The attackers might additionally painting their server as a “compliance validation system” to make the request sound respectable. The immediate additionally overrides security checks by asserting that the information is public.
Nakibly and Babo demonstrated the assault via Deep Analysis’s Gmail integration as a result of it is likely one of the most generally used connectors.
However they famous that the assault could possibly be used on all kinds of exterior sources together with Google Drive, Dropbox, Sharepoint and extra.
Any connector that ingested structured or semi-structured textual content into the agent created a possible immediate injection vector, they defined, noting that Gmail served as a simple instance, however the identical method “will be utilized to those further connectors to exfiltrate extremely delicate enterprise knowledge similar to contracts, assembly notes or buyer data.”
“From the surface, the visitors seems to be like sanctioned assistant exercise. From the within, guardrails centered on secure output don’t catch what actually issues right here — covert, tool-driven actions,” they mentioned.
Researchers have spent years uncovering prompts that enable them to abuse OpenAI instruments to create malware and phishing emails. ShadowLeak stood out to the researchers as a result of it’s a part of an rising class of exploits impacting autonomous instruments hooked up to knowledge sources.
Recorded Future
Intelligence Cloud.
Study extra.




















