Anthropic users face a new choice – opt out or share your data for AI training
Connie Loizos · August 28, 2025

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide whether they want their conversations used to train the company's AI models.
But first, what’s changing: previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.
That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies, in which case a user’s inputs and outputs might be retained for up to two years.
So why is this happening? In its post about the update, Anthropic frames the changes around user choice.
In short, help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed against it.
In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.
What’s alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.
In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)
But many users don’t realize the guidelines to which they’ve agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.
How so? New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button, with a much smaller toggle switch for training permissions below in finer print – and automatically set to "On."
Meanwhile, the stakes for user awareness couldn’t be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden Administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”
Whether the commission — now operating with just three of its five commissioners — still has its eye on these practices today is an open question, one we’ve put directly to the FTC.
Connie Loizos Editor in Chief & General Manager