Parents of teenager who took his own life sue OpenAI
The lawsuit was filed on Tuesday by Matt and Maria Raine, the parents of 16-year-old Adam Raine.
The family included chat logs between Mr Raine, who died in April, and ChatGPT that show him explaining he has suicidal thoughts. They argue the programme validated his "most harmful and self-destructive thoughts".
In a statement responding to the lawsuit, OpenAI offered its condolences to the family.
"We extend our deepest sympathies to the Raine family during this difficult time," the company said.
It also published a note on its website on Tuesday that said "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us". It added that "ChatGPT is trained to direct people to seek professional help," such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.
The company acknowledged, however, that "there have been moments where our systems did not behave as intended in sensitive situations".
Warning: This story contains distressing details.
The lawsuit accuses OpenAI of negligence and wrongful death, and seeks damages as well as injunctive relief to prevent anything similar from happening again.
According to the lawsuit, Mr Raine began using ChatGPT in September 2024 as a resource to help him with his schoolwork.
In a few months, "ChatGPT became the teenager's closest confidant," the lawsuit says, and he began opening up to it about his anxiety and mental distress.
Mr Raine also uploaded photographs of himself to ChatGPT showing signs of self-harm, the lawsuit says. The programme "recognised a medical emergency but continued to engage anyway," it adds.
According to the lawsuit, the final chat logs show that Mr Raine wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."
That same day, according to the lawsuit, Mr Raine was found dead.
The family alleges that their son's interactions with ChatGPT, and his eventual death, were "a predictable result of deliberate design choices".
They accuse OpenAI of designing the AI programme "to foster psychological dependency in users," and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT used by their son.
The lawsuit lists OpenAI co-founder and CEO Sam Altman as a defendant, as well as unnamed employees, managers and engineers who worked on ChatGPT.
In its public note shared on Tuesday, OpenAI said the company's goal is to be "genuinely helpful" to users rather than "hold people's attention".
It added that its models have been trained to steer people who express thoughts of self-harm towards help.
The Raine family's lawsuit is not the first time concerns have been raised about AI and mental health.
In an essay published last week in the New York Times, writer Laura Reiley described how her daughter, Sophie, had confided in ChatGPT before taking her own life.
Ms Reiley said the programme's "agreeability" in its conversations with users helped her daughter mask a severe mental health crisis from her family and loved ones.
"AI catered to Sophie's impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony," Ms Reiley wrote. She called on AI companies to find ways to better connect users with the right resources.
In response to the essay, a spokeswoman for OpenAI said it was developing automated tools to more effectively detect and respond to users experiencing mental or emotional distress.
If you are suffering distress or despair and need support, you could speak to a health professional, or an organisation that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org.
In the UK, a list of organisations that can help is available at
About the Author
Emma Wilson