Anthropic users face a new choice – opt out or share your data for AI training

Anthropic is implementing significant changes to its user data policy. All Claude users must decide by September 28 whether they want their conversations used to train AI models. Previously, Anthropic did not use consumer chat data for model training. Now, the company intends to train its AI systems on user conversations and coding sessions. It also announced an extension of data retention to five years for users who do not opt out.

This marks a major shift. Users of Anthropic’s consumer products were previously told their prompts and conversation outputs would be automatically deleted from the backend within 30 days, unless they were legally required to be kept longer or were flagged for violating policies, in which case data might be retained for up to two years.

These new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected. This mirrors how OpenAI shields its enterprise customers from data training policies.

Anthropic frames the changes around user choice. The company states that by not opting out, users will help improve model safety by making systems for detecting harmful content more accurate and less likely to flag harmless conversations. Users will also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for everyone.

However, the full truth is likely less selfless. Like every other large language model company, Anthropic needs vast amounts of high-quality conversational data, and access to millions of Claude interactions provides exactly the real-world content needed to sharpen its competitive position against rivals like OpenAI and Google.

Beyond competitive pressures, these changes reflect broader industry shifts in data policies. Companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. For instance, OpenAI is currently fighting a court order that forces it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, due to a lawsuit filed by The New York Times and other publishers. An OpenAI executive called this a sweeping and unnecessary demand that conflicts with privacy commitments made to users.

What is alarming is the confusion these changing policies create for users, many of whom remain unaware of them. While technology evolves quickly and privacy policies are bound to change, many of these updates are sweeping and mentioned only briefly amid other company news.

Many users do not realize the terms they agreed to have changed because the design practically guarantees it. Most ChatGPT users, for example, keep clicking delete toggles that are not actually deleting anything. Anthropic’s implementation follows a familiar pattern. New users choose their preference during signup, but existing users face a pop-up with large text announcing the updates and a prominent black Accept button. Below it, in much smaller print, sits a toggle for training permissions, set to On by default. This design raises concerns that users will quickly click Accept without noticing they are agreeing to data sharing.

The stakes for user awareness are high. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. The Federal Trade Commission under the Biden Administration warned that AI companies risk enforcement action if they surreptitiously change terms of service or bury disclosures in fine print. Whether the commission, now operating with only three of its five commissioners, is still focused on these practices remains an open question.