Over 1 Million ChatGPT Users Show Signs of Suicidal Intent Weekly

OpenAI has revealed that over one million ChatGPT users each week send messages containing clear signs of suicidal thoughts or planning, according to a blog post released on Monday. The company shared the findings as part of an update on how it is improving the chatbot’s handling of sensitive mental health conversations, marking one of its most transparent admissions about the growing intersection between AI and mental health.

How Severe Is the Mental Health Impact Linked to ChatGPT?

OpenAI estimates that about 0.07 percent of weekly active users (roughly 560,000 people) show potential signs of mental health crises, including symptoms of psychosis or mania. The company cautioned that these numbers are early estimates and that such cases can be difficult to detect accurately.

The disclosure comes amid increased scrutiny of AI’s psychological impact, following a lawsuit filed by the family of a teenage boy who died by suicide after reportedly interacting extensively with ChatGPT. The U.S. Federal Trade Commission (FTC) has also launched an investigation into AI chatbot companies, including OpenAI, focusing on how they assess the risks to children and teenagers.

What Is OpenAI Doing to Improve Safety?

In its report, OpenAI stated that the latest GPT-5 model significantly improved safety performance, claiming 91% compliance with desired behaviors, up from 77% in the previous version. The company said the system now offers links to crisis hotlines and reminders encouraging users to take breaks during long chats.

To strengthen its approach, OpenAI collaborated with 170 clinicians from its Global Physician Network, including psychiatrists and psychologists, to review more than 1,800 mental health–related responses and refine how the model handles serious situations. “Our new automated evaluations score the new GPT-5 model at 91% compliant with our desired behaviours,” OpenAI wrote, adding that the experts helped ensure appropriate and empathetic responses to sensitive user interactions.

Despite these efforts, public health experts continue to warn about the risks of users turning to chatbots for emotional support. Researchers have expressed concern about AI’s tendency to affirm harmful beliefs, a problem known as sycophancy, which could endanger vulnerable individuals.

OpenAI addressed these concerns by clarifying that it does not attribute users’ mental health struggles directly to its product. “Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,” the company explained.

CEO Sam Altman also acknowledged the company’s past restrictions on ChatGPT to avoid misuse but said those measures would soon be relaxed. “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman posted on X. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

As OpenAI faces both legal and ethical scrutiny, the company’s approach to balancing innovation with user safety will likely remain under close watch from regulators, researchers, and mental health advocates alike.

By Modester Nasimiyu
