

Anthropic users face a new choice – opt out or share your chats for AI training

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own.

But first, what’s changing: Previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.

That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer,” or their input was flagged as violating its policies, in which case a user’s inputs and outputs might be retained for up to two years.

Full report: Anthropic will start training its AI on your chats unless you opt out.