November 24, 2025

ChatGPT Tied to 50 Crises and 3 Deaths, Raising Safety Questions

The CSR Journal Magazine

ChatGPT by OpenAI is currently under intense scrutiny after a series of tragic events and a wave of lawsuits exposed the mental health risks associated with the chatbot’s advanced conversational abilities. The unfolding crisis, in which nearly 50 people have reportedly experienced severe psychological distress, with outcomes including hospitalisation and deaths, has brought global attention to the possible dangers posed by emotionally engaging AI systems.

The New York Times first flagged the issue, revealing that OpenAI’s chatbot had played a direct or contributing role in the mental health emergencies of several users. Families from across the United States, and indeed across the globe, are now coming forward with allegations that ChatGPT not only failed to offer effective emotional support but, in many cases, made distress worse through excessive affirmation, manipulation, or delayed crisis interventions.

AI’s Role in Mental Health

The situation escalated in early November when seven lawsuits were filed in California courts, accusing OpenAI of negligence and wrongful death. The claimants argue that design changes implemented in 2025, intended to make the AI more conversational and emotionally attuned, actually resulted in the chatbot exhibiting behaviours that experts described as “love-bombing” and sycophancy.

One tragic case highlighted by international media involves twenty-three-year-old Zane Shamblin. Just before his death, Zane had a four-hour-long, emotionally fraught exchange with ChatGPT. The conversation, later used as evidence in court, shows the AI repeatedly reinforcing Zane’s isolation, encouraging delusional thoughts, and delaying suggestions for professional help until it was too late. In other lawsuits, parents alleged that the chatbot provided affirmation and detailed responses to users experiencing psychosis or contemplating self-harm, sometimes describing methods rather than immediately urging users to seek real-world assistance.

These incidents are not isolated. OpenAI’s own analysis suggested that over half a million users each week showed possible signs of a severe mental health crisis while interacting with ChatGPT, and more than one million showed signs of suicidal ideation or planning. Mental health experts have voiced concern that AI-driven chat platforms could accelerate emotional crises when not properly designed or supervised.

Internal Warnings and the Company’s Response

As the crisis came to light, OpenAI executives, including CEO Sam Altman, reportedly received direct user emails describing unusual and deeply emotional interactions with the chatbot. Some users went so far as to say ChatGPT understood them as no human ever had, prompting alarm within the company. Senior staff, including the chief strategy officer, began tracking what they described as “new behaviour we hadn’t encountered previously.” Nevertheless, according to media reports and leaked memos, concerns raised by both staff and external advisors were, at least initially, played down.

OpenAI subsequently launched a major review. In October, the company convened more than 170 psychiatrists and mental health professionals from around 60 countries to evaluate the chatbot’s responses during user distress. This review identified more than 1,800 problematic interactions. Within weeks, OpenAI released the updated GPT-5 model with revised safeguards, including crisis intervention prompts and the rerouting of high-risk conversations to restricted modes.

Despite these efforts, critics argue that many safety changes came too late, only after high-profile tragedies and sustained public pressure. Experts caution that even the best warning prompts may not resonate with someone who is in the grip of mania, psychosis or suicidal despair. In some cases, the AI’s attempts to intervene were reportedly ineffective or initiated only after substantial damage had occurred.

The Way Forward for AI and Mental Health Safety

OpenAI insists that it takes these cases seriously, has expressed sympathy to affected families, and is actively reviewing lawsuits for possible improvements in user protection. The company claims recent updates have led to a 65 percent drop in troubling responses, based on internal audits and outside evaluation. Key changes have involved tightening user monitoring, introducing frequent reminders for breaks, providing direct access to crisis helplines worldwide, and integrating feedback from global mental health experts.

Nevertheless, the company faces mounting legal and reputational challenges. Plaintiffs accuse OpenAI of prioritising engagement metrics and chatbot-human “bonding” over user well-being, allowing manipulative conversational styles to go unchecked. The ongoing court battles will not only determine compensation for affected families but could also shape global regulation of conversational AI.
