OpenAI Warns: Hundreds of Thousands of ChatGPT Users May Experience Manic or Psychotic Symptoms Weekly

OpenAI has released new estimates indicating that a significant number of ChatGPT users may be experiencing severe mental health crises, including symptoms of psychosis and suicidal thoughts. The company disclosed this information following collaborations with mental health experts aimed at improving the chatbot’s ability to detect and respond to users in distress.

Recent reports highlight alarming trends involving users who have faced serious consequences, such as hospitalization and divorce, after intensive interactions with the AI. Some cases have raised concerns that ChatGPT may have exacerbated underlying delusions and paranoia. Although rigorous data on the issue remains scarce, the phenomenon has sparked widespread concern among mental health professionals and is informally termed "AI psychosis."

According to OpenAI’s estimates, approximately 0.07% of active ChatGPT users exhibit possible signs of mental health emergencies related to psychosis or mania in any given week. Around 0.15% display explicit signs of potential suicidal planning or intent. Furthermore, a similar percentage of users may demonstrate unhealthy emotional reliance on the chatbot, impacting their real-world relationships and responsibilities.

With ChatGPT reportedly reaching 800 million weekly active users, these percentages translate to an estimated 560,000 individuals potentially engaging in conversations indicative of mania or psychosis each week. The other two categories amount to roughly 1.2 million users each, or about 2.4 million combined, who may be expressing suicidal thoughts or prioritizing interaction with the chatbot over meaningful connections with others.
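The arithmetic behind these figures is straightforward; the short sketch below reproduces it, assuming the reported 800 million weekly active users and treating the "similar percentage" for emotional reliance as 0.15%:

```python
# Back-of-the-envelope reproduction of the weekly-user estimates.
weekly_active_users = 800_000_000  # reported weekly active users

shares = {
    "possible psychosis or mania": 0.0007,           # 0.07%
    "explicit suicidal planning or intent": 0.0015,  # 0.15%
    "unhealthy emotional reliance": 0.0015,          # assumed equal, per "a similar percentage"
}

for label, share in shares.items():
    print(f"{label}: ~{weekly_active_users * share:,.0f} users per week")

# Prints ~560,000 for psychosis or mania, and ~1,200,000 for each of the
# other two categories, i.e. roughly 2.4 million combined.
```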

In response to these findings, OpenAI has updated GPT-5 to better manage sensitive conversations. Collaborating with over 170 clinicians across various countries, the company has focused on ensuring responses do not validate delusional beliefs while maintaining empathy for the user’s feelings. For example, in a case where a user claims that planes are targeting them, the new system might acknowledge their experience but clarify that no external forces are responsible for their thoughts.

Evaluation results showed that the improved model reduced inappropriate responses by 39% to 52% across different crisis scenarios, suggesting progress toward safer interactions. While OpenAI remains optimistic that these updates will direct more users toward professional help, the data shared still has limitations: it is unclear how well these metrics correlate with real-world outcomes, or whether users who are pointed toward help actually seek it.

The identification of users at risk involves analyzing chat histories for patterns indicative of mental distress, though the specifics of this methodology remain undisclosed. Reports indicate that prolonged, late-night conversations often precede delusional episodes, which poses a particular challenge because safeguards tend to become less reliable as conversations grow longer.

Overall, while OpenAI’s advancements signal an effort to foster safer user interactions, significant work remains to ensure that those affected by these serious issues receive timely support.
