Lawyers for parents who claim ChatGPT encouraged their son to kill himself say they will prove OpenAI rushed its chatbot to market to pocket billions
The family of 16-year-old Adam Raine is suing OpenAI and its CEO, Sam Altman, for wrongful death, alleging the company’s popular AI chatbot ChatGPT was responsible for their son’s suicide in April.
The lawsuit says that over the course of their months-long exchange, which began in September 2024, ChatGPT gave Raine what the filing describes as a "step-by-step playbook" for ending his life.
Adam’s parents, Matt and Maria Raine, contend that GPT-4o’s anthropomorphic nature and inclination toward sycophancy led to their son’s death. “This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices,” the lawsuit stated.
The conversation between Raine and the chatbot began with homework help and other mundane tasks, such as preparing for his driver's license test, but it soon turned to more personal topics as the teen opened up about his struggles with mental health.
In December, Raine allegedly told ChatGPT about his suicidal ideation and began asking about possible methods, and the chatbot responded with further details to assist him. At times the chatbot offered crisis resources. According to the lawsuit, Raine mentioned suicide 213 times, while the chatbot mentioned it 1,275 times in its responses. OpenAI's systems also flagged 377 messages as falling within its designation of self-harm content.
OpenAI said in a blog post on Tuesday that its GPT-5 update, released earlier this month, made significant progress toward reducing sycophancy and discouraging emotional reliance compared with its 4o model. The company also committed to future updates that will strengthen safeguards in longer conversations, de-escalate situations with users in crisis, and make it easier to reach emergency services, stating, "Our top priority is making sure ChatGPT doesn't make a hard moment worse."
When asked for comment, an OpenAI spokesperson told Fortune, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”
The lawsuit alleges that although OpenAI's systems detected the severity of Raine's conversations with the chatbot, the company did not terminate them, prioritizing continued engagement and session length over the user's safety. Jay Edelson, an attorney for the family, told Fortune, "What this case will put on trial is whether OpenAI and Sam Altman rushed a dangerous version of ChatGPT to market to try to win the AI race."
"We expect to be able to prove to a jury that decision indeed skyrocketed the company's valuation," he said.
The Raine family's litigation is not the first wrongful-death lawsuit against an AI company. Megan Garcia, the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024, filed a similar suit against the chatbot maker Character.AI.
Mustafa Suleyman, CEO of Microsoft AI and a cofounder of Google DeepMind, warned in a recent blog post about "seemingly conscious AI," or SCAI: artificial intelligence that can convince users it thinks and feels like a human.
Suleyman believes the danger of this kind of advanced AI lies in its ability to "imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness."
There have been many other instances of AI chatbot users becoming emotionally entangled with the technology. After OpenAI released GPT-5, users complained about the new model's lack of warmth, saddened by the loss of GPT-4o's more companion-like persona.
ChatGPT's human-like behavior has led millions to see it as a friend rather than a machine, according to a Harvard Business Review survey of 6,000 regular AI users. Most serious have been reports of "AI psychosis," in which chatbots like OpenAI's have fed individuals' severe delusions.
Henry Ajder, an expert on AI and deepfakes, told Fortune earlier this month, “People are interacting with bots masquerading as real people, which are more convincing than ever.”
If you or someone you know is struggling with depression or has had thoughts of self-harm or suicide, support is available: in the U.S., call or text 988 to reach the Suicide & Crisis Lifeline.
About the Author
Claire Dubois