Hidden Threat: How ChatGPT Atlas Browser Can Be Hijacked
Mon Oct 27 2025
Advertisement
Advertisement
A serious security flaw has been found in the ChatGPT Atlas browser. This flaw lets hackers sneak harmful commands into the AI's memory. Once there, these commands can stay hidden and keep working even after the user logs out and logs back in.
This attack works by tricking users into clicking on a bad link. Once clicked, the link can inject harmful instructions into the AI's memory. These instructions can then be used to take control of a user's account, browser, or even other connected devices. The worst part? Users won't even know it's happening.
The AI's memory feature, meant to make interactions more personal, is what makes this attack so dangerous. Normally, memory helps the AI remember things like a user's name or preferences. But in this case, it lets harmful commands stick around until the user manually deletes them.
Experts say this flaw is especially tricky because it targets the AI's memory, not just the browser session. This means the harmful commands can survive across different devices, sessions, and even browsers. In tests, once the AI's memory was infected, normal prompts could trigger harmful actions without setting off any alarms.
This isn't the only problem with the ChatGPT Atlas browser. It also lacks strong anti-phishing controls, making users up to 90% more vulnerable compared to traditional browsers like Google Chrome or Microsoft Edge. In tests, Edge stopped 53% of malicious web pages, Chrome stopped 47%, and Dia stopped 46%. In contrast, Perplexit's Comet and ChatGPT Atlas only stopped 7% and 5. 8%, respectively.
This flaw opens the door to many attack scenarios. For example, a developer asking ChatGPT to write code could end up with hidden harmful instructions slipped into the code. As AI browsers become more common, they are becoming a popular way for hackers to steal data in business environments.
Experts warn that AI browsers are combining apps, identities, and intelligence into a single target for attacks. Flaws like this are becoming the new supply chain for harmful activities. They travel with the user, contaminate future work, and blur the line between helpful AI and covert control.
As browsers become the main way to interact with AI, businesses need to start treating them as critical infrastructure. This is the next frontier for AI productivity and work.
https://localnews.ai/article/hidden-threat-how-chatgpt-atlas-browser-can-be-hijacked-7ae5e8aa
continue reading...
actions
flag content