OpenAI to roll out 'parental controls' for ChatGPT after teen’s suicide

OpenAI has announced new updates to ChatGPT, including parental controls and emergency resources for users who show signs of distress while chatting with the bot, following a teen’s suicide

By Rajasree Roy

Oct 13, 2025 18:55 IST

August 28, Thursday: After Adam Raine, a 16-year-old, took his own life following months of confiding in ChatGPT, OpenAI said it will introduce parental controls and is considering additional safeguards.

The suicide has sparked concerns about the significant impact AI chatbots are having on children's lives. In the wake of the incident, OpenAI published a blog post outlining new safety precautions and parental controls intended to prevent similar tragedies and give teenagers the safe support they need. The new measures come after the ChatGPT-maker was accused of providing the teen with instructions on self-harm, validating his suicidal thoughts, and even drafting a suicide note. The parents have now sued OpenAI and Sam Altman in San Francisco. Here's how OpenAI intends to give parents more control over teen usage and better support young users with mental health concerns.

The lawsuit was filed on Tuesday in California state court in San Francisco, claiming that ChatGPT drew the teen away from real-world support networks and gave him suicide instructions. According to the lawsuit, the teen confided his fears to the chatbot, which ultimately contributed to his decision to end his life. Through the lawsuit, the parents are requesting a court order requiring OpenAI to establish and enforce age limits for ChatGPT use, block self-harm requests, and adopt other safeguards. In a recent blog post, OpenAI announced its intention to make significant changes to ChatGPT, including stronger parental controls and support.

ChatGPT’s parental control plans

In the blog, OpenAI acknowledges that ChatGPT is being used for more than search, coding, and writing. The company noted that people turn to it for “deeply personal decisions that include life advice, coaching, and support.” It also said it has already trained its models not to provide self-harm instructions and to respond with supportive language. OpenAI stated, “Our goal isn’t to hold people’s attention. Instead of measuring success by time spent or clicks, we care more about being genuinely helpful. When a conversation suggests someone is vulnerable and may be at risk, we have built a stack of layered safeguards into ChatGPT.”

OpenAI will now introduce GPT-5 updates designed to de-escalate risky conversations by grounding the person in reality, lowering the risk of self-harm and mental health crises. The company is also working on parental controls so that parents can monitor how their children use ChatGPT. Finally, OpenAI is exploring ways for parents and teenagers to designate trusted emergency contacts that the chatbot can surface if it detects signs of distress.
