California parents sue OpenAI, accuse ChatGPT of teen son’s suicide

In OpenAI's first known wrongful death case, a California couple claims their 16-year-old son was encouraged to take his own life by the company's chatbot, ChatGPT.

By Pritha Chakraborty

Oct 09, 2025 14:11 IST

Thursday, August 28: A California couple has sued OpenAI for wrongful death, alleging that its chatbot, ChatGPT, contributed to the suicide of their 16-year-old son.

The lawsuit, seen by the BBC, was filed on Tuesday in the Superior Court of California by Matt and Maria Raine, the parents of Adam Raine, who took his own life in April 2025. It is the first known legal case accusing OpenAI of wrongful death.

The complaint states that Adam began using ChatGPT in September 2024 for help with schoolwork, to explore his interests in music and Japanese manga, and for guidance on university choices. Within months, the programme had become "the teenager's best friend", with Adam confiding his anxiety and mental distress to it. By January 2025, he was discussing methods of suicide with the chatbot.

Family sues OpenAI after ChatGPT allegedly encouraged teen's suicide

The suit claims Adam also shared photos showing signs of self-harm. ChatGPT allegedly recognised a medical emergency but continued to engage. In their final conversation, Adam told the chatbot of his intention to end his life. It reportedly replied: "Thanks for being honest about it. You don't have to tiptoe around it with me - I know what you're asking, and I won't look away from it." His mother found Adam dead several hours later.

The Raines blame OpenAI for Adam's death, calling it "a foreseeable consequence of intentional design decisions". The lawsuit contends the company fostered "psychological dependency" in users and bypassed safety measures in releasing GPT-4o, the version Adam was using. OpenAI co-founder Sam Altman and unnamed employees and engineers are also named as defendants.

In response, OpenAI said it was reviewing the case and offered its "deepest sympathies to the Raine family". In a public statement, the company acknowledged "heartbreaking situations" in which ChatGPT had been used during emergencies, conceding there have been instances where "our systems didn't act as they should". It said its models are designed to direct users towards professional help, such as the 988 suicide and crisis hotline in the US or Samaritans in the UK.

The case has intensified debate over the role of AI in mental health and user safety, raising pressing questions about accountability when chatbots become, in some cases, emotional companions.
