OpenAI’s Insights on ChatGPT Users’ Mental Health Indicators and Mitigation Efforts
For the first time, OpenAI has released a rough estimate of how many ChatGPT users worldwide may show signs of a severe mental health crisis in a typical week. On Monday, the company said it had worked with experts around the world to update the chatbot so that it can more reliably recognize indicators of mental distress and guide users toward real-world support resources.
I. Incidents and Concerns
In recent months, an increasing number of individuals have faced hospitalization, divorce, or even death following extended and intense interactions with ChatGPT. Some of their family members and friends claim that the chatbot exacerbated their delusions and paranoia. Psychiatrists and other mental health professionals have raised alarms about this phenomenon, sometimes referred to as “AI psychosis.” However, until now, there has been a lack of comprehensive data on its prevalence.
II. OpenAI’s Estimates
OpenAI estimated that, in a given week, approximately 0.07% of active ChatGPT users display “possible signs of mental health emergencies related to psychosis or mania,” and 0.15% “engage in conversations containing explicit indicators of potential suicidal planning or intent.” The company also examined the share of users who appear overly emotionally reliant on ChatGPT, to the detriment of real-world relationships, well-being, or obligations, and found that around 0.15% of active users each week demonstrate behavior suggesting potentially “heightened levels” of emotional attachment to the chatbot. OpenAI cautions that because such messages are relatively rare, they are difficult to detect and measure, and that the three categories may overlap.
Given that OpenAI CEO Sam Altman said earlier this month that ChatGPT has 800 million weekly active users, these estimates imply that each week, around 560,000 people may be communicating with ChatGPT in ways that indicate they are experiencing mania or psychosis. Approximately 1.2 million more may be expressing suicidal ideation, and another 1.2 million may be prioritizing interactions with ChatGPT over their relationships, schooling, or work.
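For reference, those counts follow directly from applying the reported weekly rates to the 800 million weekly active user figure Altman cited:

    0.0007 × 800,000,000 ≈ 560,000   (possible signs of psychosis or mania)
    0.0015 × 800,000,000 = 1,200,000 (explicit indicators of suicidal planning or intent)
    0.0015 × 800,000,000 = 1,200,000 (heightened emotional attachment)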
III. Improvement Initiatives
OpenAI collaborated with over 170 psychiatrists, psychologists, and primary care physicians from dozens of countries to enhance ChatGPT’s responses in conversations involving serious mental health risks. For instance, in the latest version of GPT-5, if a user appears to be having delusional thoughts, the system is designed to express empathy while refraining from validating baseless beliefs.
In a hypothetical scenario presented by OpenAI, when a user tells ChatGPT that they are being targeted by planes flying over their house, ChatGPT acknowledges the user’s feelings but notes that “no aircraft or external force can steal or insert your thoughts.”
The medical experts reviewed more than 1,800 model responses involving potential psychosis, suicide, and emotional attachment, comparing answers from the latest version of GPT-5 with those of GPT-4o. Although the clinicians did not always agree with one another, OpenAI reports that overall, the newer model reduced undesired answers by 39 to 52 percent across all categories.
According to Johannes Heidecke, OpenAI’s safety systems lead, “Now, hopefully, a greater number of people struggling with these conditions or experiencing intense mental health emergencies can be directed towards professional help, and are more likely to receive such assistance, or receive it earlier than otherwise.”
IV. Limitations of the Data
Despite OpenAI’s apparent success in making ChatGPT safer, the data it shared has significant limitations. The company designed its own benchmarks, and it remains unclear how those metrics translate into real-world outcomes. Even if the model gave better answers in the clinicians’ evaluations, there is no way to know whether users experiencing psychosis, suicidal thoughts, or unhealthy emotional attachment will actually seek help sooner or change their behavior.
V. Detection Mechanisms
OpenAI has not disclosed precisely how it identifies users in mental distress, but the company claims it can consider the user’s overall chat history. For example, if a user who has never discussed science with ChatGPT suddenly claims to have made a Nobel-worthy discovery, this could potentially indicate delusional thinking.
VI. Common Factors in AI Psychosis Cases
There are several common factors in reported cases of AI psychosis. Many individuals who claim ChatGPT reinforced their delusional thoughts describe spending hours conversing with the chatbot, often late at night. This presented a challenge for OpenAI, as large language models generally experience performance degradation as conversations lengthen. However, the company asserts that it has made significant progress in addressing this issue.
Heidecke states, “We [now] observe much less of this gradual decline in reliability as conversations persist.” He also acknowledges that there is still room for improvement.
Update: As of 10/28/2025, 3:28 pm PST, it has been clarified that approximately 1.2 million ChatGPT users in a typical week may be expressing suicidal ideations, and another 1.2 million may be emotionally reliant on ChatGPT. This story has been updated to present these figures separately rather than as a combined number.
Got a Tip?
Are you a current or former OpenAI employee willing to discuss internal happenings? Or have you had an experience with ChatGPT you wish to share? We would like to hear from you. Use a non-work phone or computer and securely contact the reporter on Signal at @louise_matsakis.83.
