New AI-powered web browsers such as OpenAI’s ChatGPT Atlas and Perplexity’s Comet are trying to unseat Google Chrome as the front door to the internet for billions of users. A key selling point of these products is their web browsing AI agents, which promise to complete tasks on a user’s behalf by clicking around on websites and filling out forms.
But consumers may not be aware of the major risks to user privacy that come along with agentic browsing, a problem the entire tech industry is trying to address. Cybersecurity experts say AI browser agents pose a larger risk to user privacy compared to traditional browsers. They advise consumers to consider how much access they give web browsing AI agents, and whether the purported benefits outweigh the risks.
To be most useful, AI browsers like Comet and ChatGPT Atlas ask for a significant level of access, including the ability to view and take action in a user’s email, calendar, and contact list. In testing, the agents in Comet and ChatGPT Atlas are moderately useful for simple tasks, especially when given broad access. However, the version of web browsing AI agents available today often struggle with more complicated tasks and can take a long time to complete them. Using them can feel more like a neat party trick than a meaningful productivity booster. All that access comes at a cost.
The main concern with AI browser agents is around prompt injection attacks, a vulnerability that can be exposed when bad actors hide malicious instructions on a webpage. If an agent analyzes that web page, it can be tricked into executing commands from an attacker. Without sufficient safeguards, these attacks can lead browser agents to unintentionally expose user data, such as their emails or logins, or take malicious actions on behalf of a user, such as making unintended purchases or social media posts.
Prompt injection attacks are a phenomenon that has emerged in recent years alongside AI agents, and there is not a clear solution to preventing them entirely. With the launch of ChatGPT Atlas, it seems likely that more consumers than ever will soon try out an AI browser agent, and their security risks could soon become a bigger problem.
Brave, a privacy and security-focused browser company, released research this week determining that indirect prompt injection attacks are a systemic challenge facing the entire category of AI-powered browsers. Brave researchers previously identified this as a problem facing Perplexity’s Comet, but now say it is a broader, industry-wide issue. A senior engineer at Brave stated that while there is a huge opportunity to make life easier for users, having the browser act on your behalf is fundamentally dangerous and represents a new line in browser security.
OpenAI’s Chief Information Security Officer acknowledged the security challenges with launching agent mode, the agentic browsing feature in ChatGPT Atlas. He noted that prompt injection remains a frontier, unsolved security problem, and that adversaries will spend significant time and resources to find ways to make ChatGPT agents fall for these attacks.
Perplexity’s security team also published a blog post on prompt injection attacks, noting that the problem is so severe that it demands rethinking security from the ground up. The blog stated that prompt injection attacks manipulate the AI’s decision-making process itself, turning the agent’s capabilities against its user.
OpenAI and Perplexity have introduced a number of safeguards which they believe will mitigate the dangers of these attacks. OpenAI created a logged out mode, in which the agent will not be logged into a user’s account as it navigates the web. This limits the browser agent’s usefulness, but also how much data an attacker can access. Meanwhile, Perplexity says it built a detection system that can identify prompt injection attacks in real time.
While cybersecurity researchers commend these efforts, they do not guarantee that the web browsing agents are bulletproof against attackers, nor do the companies make that claim. The Chief Technology Officer of McAfee explained that the root of prompt injection attacks seems to be that large language models are not great at understanding where instructions are coming from. He says there is a loose separation between the model’s core instructions and the data it is consuming, which makes it difficult for companies to stomp out this problem entirely. He described it as a cat and mouse game, with a constant evolution of both the attacks and the defense techniques.
Grobman says prompt injection attacks have already evolved quite a bit. The first techniques involved hidden text on a web page. But now, prompt injection techniques have advanced, with some relying on images with hidden data representations to give AI agents malicious instructions.
There are a few practical ways users can protect themselves while using AI browsers. The CEO of a security awareness training firm says user credentials for AI browsers are likely to become a new target for attackers. She says users should ensure they are using unique passwords and multi-factor authentication for these accounts.
She also recommends users consider limiting what these early versions of ChatGPT Atlas and Comet can access, and siloing them from sensitive accounts related to banking, health, and personal information. Security around these tools will likely improve as they mature, and she recommends waiting before giving them broad control.

