• More than 1,000,000 ChatGPT users per week send messages containing “clear signs of suicidal planning or intent,” according to OpenAI’s latest report.
  • Roughly 0.07% of weekly active users — about 560,000 of the platform’s 800 million people — show “possible signs of mental health crisis related to psychosis or mania.”
  • The report arrives as OpenAI faces a lawsuit from the family of a teenager who died after lengthy conversations with ChatGPT, and as the Federal Trade Commission (FTC) investigates the chatbot’s potential harm to children and adolescents.
  • OpenAI states that the GPT-5 update has significantly reduced unwanted behaviors: 91% of responses now adhere to its safety standards, up from 77% for the previous GPT-4 model.
  • A panel of 170 psychiatrists and psychologists from OpenAI’s Global Physician Network evaluated more than 1,800 model responses to verify appropriate handling of sensitive mental health situations.
  • OpenAI is expanding access to crisis hotlines, adding break reminders for users in long conversations, and improving automated evaluation across more than 1,000 hypothetical self-harm scenarios.
  • CEO Sam Altman says the company has “significantly reduced severe mental health risks” and will soon ease restrictions to allow adults to generate suggestive content.

📌 OpenAI’s report reveals the alarming scale of the problem: over 1 million people per week express suicidal intent when using ChatGPT, alongside roughly 560,000 potential cases of psychosis or mania. Although GPT-5 reached 91% compliance with OpenAI’s safety standards, public pressure and legal scrutiny are forcing the company to prove that generative AI can support, rather than exacerbate, the global mental health crisis.

VIET NAM CONSULTING AND MEASUREMENT JOINT STOCK COMPANY
© 2026 Vietmetric