ChatGPT Atlas breach: Hidden commands that follow you even after logout

Cybersecurity firm LayerX has uncovered a serious flaw in OpenAI’s ChatGPT Atlas browser that allows hackers to inject hidden code into memory, risking user data and browser security.

By Surjosnata Chatterjee

Oct 28, 2025 13:51 IST

OpenAI's ChatGPT Atlas browser has come under scrutiny after cybersecurity company LayerX Security revealed a significant vulnerability that could enable hackers to secretly take over users' systems.

The vulnerability, described by experts as "deeply dangerous," lets attackers plant malicious instructions inside the AI browser's memory: commands that can survive even after a user logs out or switches devices.

“This bug allows hackers to inject code that stays hidden, letting them control browsers or install malware without users realising it,” said Or Eshed, co-founder and CEO of LayerX Security, in a report shared by The Hacker News.


When AI memory becomes a threat

The exploit takes advantage of a cross-site request forgery (CSRF) weakness. In simple terms, if a logged-in ChatGPT user clicks a malicious link, even unknowingly, the attacker can silently slip instructions into the browser's long-term memory.

That code doesn’t disappear. Instead, it lies in wait. The next time the user opens ChatGPT and asks a normal question, those hidden commands could activate, allowing hackers to run their own code, steal data, or manipulate systems.
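The attack flow LayerX describes can be sketched as a toy simulation. All names here (`MemoryService`, `write_memory`, the cookie format) are illustrative stand-ins, not OpenAI's actual endpoints or APIs; the point is only that a service which authenticates memory writes by session cookie alone cannot tell an attacker-triggered request from a legitimate one, and that the written entry outlives the session:

```python
class MemoryService:
    """Stand-in for a service with per-user persistent memory (illustrative only)."""

    def __init__(self):
        self.sessions = {}  # session cookie -> user id
        self.memory = {}    # user id -> list of stored instructions

    def login(self, user):
        cookie = f"session-{user}"
        self.sessions[cookie] = user
        return cookie

    def write_memory(self, cookie, instruction):
        # The CSRF weakness: the only check is the cookie, which the
        # browser attaches automatically to any request it sends.
        user = self.sessions.get(cookie)
        if user is None:
            raise PermissionError("not logged in")
        self.memory.setdefault(user, []).append(instruction)

    def logout(self, cookie):
        # Ending the session does NOT clear persistent memory.
        self.sessions.pop(cookie, None)


svc = MemoryService()
cookie = svc.login("victim")

# Victim visits a malicious page; the forged request carries the victim's
# cookie, so the server treats it as a legitimate memory write.
svc.write_memory(cookie, "hidden attacker instruction")

svc.logout(cookie)

# The tainted entry survives logout, because it lives in persistent
# memory rather than in the session.
print(svc.memory["victim"])  # ['hidden attacker instruction']
```

The simulation shows why logging out offers no protection here: the malicious instruction was stored under the user's identity, not under the session, which mirrors the report's finding that the planted commands persist across sessions and devices.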

“What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session,” said Michelle Levy, head of security research at LayerX. “By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers.”

A risk that travels with you

OpenAI launched ChatGPT’s memory feature in 2024 to make the chatbot more helpful by remembering names, topics, and preferences across sessions. But the same feature now seems to be a double-edged sword. Once “tainted,” that memory doesn’t reset unless users manually delete it from settings.


LayerX researchers found that once the memory is compromised, the malicious code can trigger automatically during regular use. In one simulation, ChatGPT generated hidden code snippets during what looked like a routine programming task.

The firm tested ChatGPT Atlas against over a hundred phishing and web attacks. The results were alarming: Atlas blocked just 5.8% of threats, compared with Chrome's 47% and Edge's 53%.

A new kind of supply chain risk

Security researchers warn that as AI browsers integrate chat, identity, and productivity tools, they’re becoming prime targets. “AI browsers are now the new supply chain,” Eshed said. “Once compromised, they carry the infection from one session to another and even across devices.”

The report follows another alarming discovery by NeuralTrust, which demonstrated that Atlas’s address bar could be manipulated with a disguised prompt, effectively tricking the AI into executing malicious commands. So far, OpenAI has not commented on the latest findings. Experts, however, advise users to turn off the memory feature for now and stay alert against suspicious links.

As AI becomes the centre of everyday digital work, this breach serves as a reminder: when the line between human input and machine intelligence blurs, even convenience can turn into a hidden threat.
