Anthropic has launched a research preview of a browser-based AI agent powered by its Claude AI models. The new agent, Claude for Chrome, is initially rolling out to a group of 1,000 subscribers on Anthropic’s Max plan, which costs between one hundred and two hundred dollars per month. The company is also opening a waitlist for other interested users.
By adding an extension to Chrome, selected users can now chat with Claude in a sidecar window that maintains context of everything happening in their browser. Users can also give the Claude agent permission to take actions in their browser and complete certain tasks on their behalf.
The browser is quickly becoming the next battleground for AI labs, which aim to use browser integrations to offer more seamless connections between AI systems and their users. Perplexity recently launched its own browser, Comet, which features an AI agent that can offload tasks for users. OpenAI is reportedly close to launching its own AI-powered browser, which is rumored to have similar features to Comet. Meanwhile, Google has launched Gemini integrations with Chrome in recent months.
The race to develop AI-powered browsers is especially pressing given Google’s looming antitrust case, in which a final decision is expected any day now. The federal judge in the case has suggested he may force Google to sell its Chrome browser. Perplexity submitted an unsolicited 34.5 billion dollar offer for Chrome, and OpenAI CEO Sam Altman suggested his company would be willing to buy it as well.
In the announcement, Anthropic warned that the rise of AI agents with browser access poses new safety risks. Last week, Brave’s security team said it found that Comet’s browser agent could be vulnerable to indirect prompt-injection attacks, where hidden code on a website could trick the agent into executing malicious instructions when it processed the page. Perplexity’s head of communications stated that the vulnerability Brave raised has been fixed.
Anthropic says it hopes to use this research preview as a chance to catch and address novel safety risks. The company has already introduced several defenses against prompt injection attacks. The company says its interventions reduced the success rate of prompt injection attacks from 23.6 percent to 11.2 percent.
For example, Anthropic says users can limit Claude’s browser agent from accessing certain sites in the app’s settings. The company has, by default, blocked Claude from accessing websites that offer financial services, adult content, and pirated content. The company also says that Claude’s browser agent will ask for user permission before taking high-risk actions like publishing, purchasing, or sharing personal data.
This is not Anthropic’s first foray into AI models that can control your computer screen. In October 2024, the company launched an AI agent that could control your PC. However, testing at the time revealed that the model was quite slow and unreliable.
The capabilities of agentic AI models have improved quite a bit since then. Modern browser-using AI agents, such as Comet and ChatGPT Agent, are fairly reliable at offloading simple tasks for users. However, many of these agentic systems still struggle with more complex problems.