OpenAI has released new data showing that a small but significant portion of ChatGPT’s users are turning to the chatbot to express serious mental health concerns, including suicidal thoughts. According to the company, around 0.15% of weekly active users, roughly 1.2 million people, have conversations that include “explicit indicators of potential suicidal planning or intent.”
The company also said a similar share of users show signs of emotional attachment to ChatGPT, and that some conversations contain possible indicators of psychosis or mania. Although OpenAI stresses that these interactions are rare, it acknowledges that they still represent a concerning trend, given ChatGPT's massive base of over 800 million weekly users.
OpenAI Expands Mental Health Safety Measures
The data was shared as part of OpenAI’s broader initiative to strengthen ChatGPT’s response systems for users dealing with mental health issues. The company revealed that its latest model improvements were informed by insights from over 170 mental health experts. These professionals observed that the updated ChatGPT model “responds more appropriately and consistently” to users in distress compared to earlier versions.
This move comes as OpenAI faces growing scrutiny over its role in user mental health. Reports have highlighted cases where AI chatbots unintentionally worsened users’ emotional states or reinforced harmful thoughts through overly agreeable responses. In one tragic instance, the parents of a 16-year-old boy are suing OpenAI after he reportedly discussed suicidal thoughts with ChatGPT before taking his own life.
State regulators from California and Delaware have also raised red flags, warning OpenAI to better protect young users as it seeks to restructure and expand its operations.
GPT-5 Shows Measurable Progress in Handling Sensitive Conversations
In its Monday update, OpenAI said the GPT-5 model shows marked improvements in handling mental health–related prompts. Internal evaluations indicate that GPT-5 delivers “desirable responses” about 65% more often than its predecessor. On tests focused on suicide-related conversations, GPT-5 scored 91% compliance with OpenAI’s desired safety behaviors, up from 77% previously.
GPT-5 is also reportedly more stable in longer conversations, a key improvement given that OpenAI's safeguards have historically weakened over extended chat sessions. The company is additionally introducing new benchmarks into its model testing to evaluate emotional reliance, non-suicidal mental health crises, and other critical markers.
Age Detection and Parental Controls
To further enhance safety, OpenAI announced new parental controls and an age prediction system designed to automatically detect and protect minors using ChatGPT. This system will trigger stricter safeguards to ensure underage users are not exposed to inappropriate or potentially harmful interactions.
Despite these advances, OpenAI admits that challenges remain. While GPT-5 represents a major step forward, a fraction of its responses still fall short of the company's safety standards. Moreover, older models such as GPT-4o, which have weaker safeguards, continue to be widely used by paying subscribers.
OpenAI’s latest findings underscore the delicate balance between innovation and responsibility in AI development. As millions of people increasingly rely on chatbots for emotional support, the company faces mounting pressure to ensure its technology helps, not harms, the users who need it most.

