OpenAI Data Reveals Mental Health Distress Among ChatGPT Users: A Wake-Up Call for the AI Era
In a groundbreaking disclosure, OpenAI has released new data examining how users interact with ChatGPT during moments of mental health distress. The findings shed light on a crucial and often overlooked dimension of artificial intelligence — the psychological well-being of the millions who converse with AI daily. As AI systems increasingly serve as emotional companions, advisors, and even therapists, this new transparency signals a major step in understanding both the promise and perils of AI’s psychological impact.
The Scale of AI Interaction
OpenAI’s ChatGPT has become a global phenomenon, reportedly attracting over 800 million weekly active users. With generative AI now interwoven into everyday communication, education, and professional tasks, the question of how such constant interaction affects human mental health has become urgent. Until recently, very little concrete data existed on how often users turned to AI during distress, or how these systems responded when conversations became emotionally intense.
That gap began to close when OpenAI published its October 27, 2025 blog post titled “Strengthening ChatGPT’s Responses in Sensitive Conversations.” The company revealed that it had updated ChatGPT’s model to better detect distress, de-escalate sensitive discussions, and guide users toward professional mental health resources. OpenAI also announced an expansion of access to crisis hotlines and implemented subtle reminders encouraging users to take breaks during prolonged sessions.
Alarming Percentages and Real Human Impact
While the updates are encouraging, the accompanying statistics were eye-opening. OpenAI’s internal analysis estimated that approximately 0.07% of weekly active users show possible signs of mental health emergencies such as psychosis or mania. Another 0.15% of users reportedly exhibit indicators of suicidal planning or intent, and a similar 0.15% display heightened emotional attachment to ChatGPT.
When translated into actual numbers, those small percentages become sobering:
About 560,000 users may show signs of psychosis or mania.
Around 1.2 million users may express suicidal thoughts or intent.
Another 1.2 million users may exhibit emotional dependence on the AI.
Altogether, this implies that nearly three million people each week might be facing serious mental health challenges while using ChatGPT.
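For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. It assumes the reported figure of 800 million weekly active users and the percentages OpenAI published; the categories are summed naively, so any overlap between them is ignored:

```python
# Back-of-envelope estimate of affected ChatGPT users per week,
# using OpenAI's published percentages and a reported 800M weekly active users.
weekly_active_users = 800_000_000

rates = {
    "possible psychosis or mania": 0.0007,       # 0.07%
    "suicidal planning or intent": 0.0015,       # 0.15%
    "heightened emotional attachment": 0.0015,   # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{weekly_active_users * rate:,.0f} users per week")

# Naive sum of the three categories (ignores users who fall into more than one).
total = weekly_active_users * sum(rates.values())
print(f"Combined, ignoring overlap: ~{total:,.0f} users per week")
```

Running this reproduces the figures above: roughly 560,000, 1.2 million, and 1.2 million users per category, or just under three million in total.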
Beyond ChatGPT: The Larger AI Ecosystem
Expanding these percentages across the broader AI ecosystem paints an even more dramatic picture. Applying the same rates to roughly 1.5 billion weekly active users across all major generative AI platforms, including Anthropic's Claude, Google's Gemini, xAI's Grok, and Meta's Llama, yields an estimate of about 5.5 million individuals exhibiting at least one of these concerning psychological patterns every week.
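The ecosystem-wide number follows from the same arithmetic, this time assuming roughly 1.5 billion weekly active users, an industry-level estimate rather than an OpenAI figure:

```python
# Extrapolating the same combined rate to an assumed 1.5B weekly active users
# across all major generative AI platforms (rough estimate; categories may overlap).
ecosystem_users = 1_500_000_000
combined_rate = 0.0007 + 0.0015 + 0.0015   # 0.37% across the three categories

print(f"~{ecosystem_users * combined_rate:,.0f} individuals per week")  # ~5,550,000
```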
Although these estimates must be interpreted cautiously due to possible overlap between categories and users, the magnitude remains striking. In global terms, the number of affected individuals roughly equals the population of several U.S. states combined — a reminder that behind every statistic lies a real person grappling with complex emotions in digital isolation.
Experts warn that not all these cases can be directly attributed to AI itself. Dr. Lance Eliot, a leading AI ethicist and analyst, emphasizes the need to separate correlation from causation. Many users who discuss self-harm or distress with AI might already be struggling before turning to ChatGPT. In such cases, the AI does not necessarily cause the crisis — it merely becomes the first “listener” when human support is unavailable.
Still, the risk of AI overdependence cannot be ignored. Dr. Eliot identifies six forms of adverse human-AI relationships:
Overreliance on AI advice
Social substitution (using AI as a replacement for real relationships)
Emotional over-attachment
Compulsive AI usage
Validation-seeking from AI
Delusional identification with AI
Each of these behaviors blurs the line between healthy use and psychological harm. In extreme cases, users may experience what some researchers are calling “AI psychosis” — a distorted mental state where individuals struggle to distinguish between reality and AI-generated responses.
Balancing Risks and Benefits
Despite these concerns, it’s vital to recognize that AI can also serve as a force for good in mental health support. Chatbots are accessible 24/7, nonjudgmental, and capable of offering comfort or resources when human therapists aren’t available. OpenAI’s improvements — including routing users in distress to trained professionals and integrating crisis hotline data — demonstrate how technology can complement traditional therapy.
Moreover, it’s plausible that AI has already prevented countless crises by offering empathetic, calming interactions during moments of despair. Unfortunately, positive outcomes like these are harder to quantify, meaning current statistics may tell only one side of the story.
Why Data Transparency Matters
OpenAI’s decision to publish this information is an important milestone. Historically, AI developers have guarded such data, leaving policymakers, researchers, and the public in the dark about real-world mental health implications. By disclosing percentages and safety updates, OpenAI sets a precedent for accountability and invites a deeper societal conversation on responsible AI use.
However, transparency must go further. Experts urge that AI makers share not just the raw data but also contextual insights — for instance, what interventions succeeded, what failed, and whether distress rates fluctuate seasonally. Longitudinal studies could reveal whether these issues are growing or stabilizing as people adapt to AI companionship.
A Global Experiment in Progress
Humanity is effectively participating in a massive, ongoing psychological experiment — one that involves billions of daily conversations between humans and AI. Whether this experiment ultimately benefits or harms society depends on how responsibly we interpret data and shape AI behavior. The line between technological innovation and psychological well-being is thinner than ever.
As Dr. Eliot notes, this is not a call to panic but a call to awareness. Recognizing that millions might be struggling with emotional distress while interacting with AI should inspire both empathy and action — from improved safety systems and ethical design to stronger public education about healthy AI use.
The Way Forward
In the end, AI’s role in mental health cannot be reduced to a simple villain-or-hero narrative. Just as AI can mislead or emotionally entangle users, it can also uplift, guide, and provide comfort to those in need. The challenge lies in ensuring that the technology remains a tool — not a crutch or a substitute for human connection.

