Anthropic is updating its policy regarding the use of user conversations with its Claude chatbot for model training. Starting October 8, users will need to opt out if they don’t want their chat data to be included in future training for Anthropic’s language models. Historically, the company did not train its AI models on user interactions, making this a significant shift.
The rationale, outlined in Anthropic’s blog post, is that data from real-world interactions can make its AI systems more accurate and useful, and that collecting more user data will help refine and improve Claude over time.
Initially scheduled for September 28, the rollout was delayed to give users additional time to understand these new terms. Gabby Curtis, a spokesperson for Anthropic, mentioned this adjustment in an email.
How to Opt Out
New users will encounter a question about their data privacy settings during the sign-up process. Existing users might see a pop-up notification regarding the changes to the terms. The default setting allows Anthropic to use chat data for training, meaning users who accept the updates without changing that setting will be opted in automatically.
To opt out, users can go to Privacy Settings and switch off the toggle under “Help improve Claude.” For those who don’t opt out, the new training policy covers new and ongoing conversations, including older chats that users return to. Anthropic is also extending its data retention period from the previous 30 days to five years for users whose data is used for training.
Who the Changes Apply To
It’s important to note that the policy change affects all of Claude’s consumer accounts, both free and paid. Users who access Claude through government or educational plans, however, are not covered by the new data-use terms.
Because Claude is popular among developers for its coding capabilities, coding sessions as well as chat logs may now contribute to training. The change brings Claude in line with other AI platforms, such as OpenAI’s ChatGPT and Google’s Gemini, which incorporate user conversations into training by default unless users opt out.
For users concerned about their privacy, it’s worth consulting a guide to opting out of AI training and keeping in mind that anything posted publicly online may still be used for training purposes.