Anthropic is implementing significant changes to its user data policies. All Claude users must decide by September 28 whether they want their conversations used to train AI models. Previously, Anthropic did not use consumer chat data for model training. The company now wants to train its AI systems on user conversations and coding sessions. It is also extending data retention to five years for those who do not opt out.
This represents a major shift. Users of Anthropic’s consumer products were previously told their prompts and conversations would be automatically deleted within 30 days unless legally required to be kept longer. Inputs flagged for policy violations might be retained for up to two years.
These new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access are unaffected, mirroring how OpenAI shields its enterprise customers from data training policies.
Anthropic frames the changes around user choice. The company states that by not opting out, users will help improve model safety and make systems for detecting harmful content more accurate. Users will also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for everyone.
However, the full truth is likely less selfless. Like every other large language model company, Anthropic needs vast amounts of high-quality conversational data, and access to millions of Claude interactions supplies the real-world content it needs to sharpen its competitive position against rivals like OpenAI and Google.
Beyond competitive pressures, these changes reflect broader industry shifts in data policies. Companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI is currently fighting a court order that forces it to retain all consumer ChatGPT conversations indefinitely due to a lawsuit filed by The New York Times and other publishers. An OpenAI executive called this a sweeping and unnecessary demand that conflicts with privacy commitments.
What is alarming is the confusion these changing policies create for users, many of whom remain unaware of them. As technology evolves rapidly, privacy policies are bound to change. However, many of these changes are sweeping and mentioned only briefly amid other company news.
Many users do not realize the guidelines they agreed to have changed because the design practically guarantees it. Thanks to that court order, for instance, many ChatGPT users keep clicking delete toggles that are not technically deleting anything. Anthropic's implementation follows a familiar pattern: new users choose their preference during signup, but existing users face a pop-up with large text announcing the updates and a prominent Accept button. A much smaller toggle switch for training permissions sits below in smaller print, automatically set to On.
This design raises concerns that users might quickly click Accept without noticing they are agreeing to data sharing. The stakes for user awareness are extremely high. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. The Federal Trade Commission previously warned that AI companies risk enforcement action if they surreptitiously change terms of service or bury disclosures in fine print. Whether the commission still focuses on these practices today remains an open question.