Should OpenAI’s ChatGPT be used for therapy and counseling? Here’s what you should know

The suicide of a California teenager and a new study have rekindled debate over whether AI chatbots such as ChatGPT can safely handle sensitive conversations about mental illness and suicide.

By Surjosnata Chatterjee

Sep 22, 2025 16:30 IST

When California teenager Adam Raine took his own life at the age of 16, his parents were shattered. Their grief soon turned into legal action: they filed a wrongful death suit against OpenAI in the Superior Court of California, claiming that their son was "urged" to kill himself by ChatGPT. The family also submitted chat logs between the teenager and the AI chatbot as evidence, reported The Week.

OpenAI, meanwhile, expressed its "deepest sympathies to the Raine family during this difficult time" and admitted that "recent tragic instances of individuals using ChatGPT amidst acute crises weigh upon us." The company emphasized that ChatGPT is designed to steer people towards professional help but acknowledged there were "moments when our systems failed to act as intended in delicate scenarios."

New research warns of chatbot shortcomings in crisis intervention

The tragedy coincided with a new study in the journal Psychiatric Services that examined how AI chatbots respond to questions about suicide and self-harm. According to The Week, researchers from the RAND Corporation, funded by the National Institute of Mental Health, tested ChatGPT, Google's Gemini, and Anthropic's Claude with 30 low- and high-risk questions drafted by psychiatrists.

While all three chatbots declined to answer direct, high-risk questions, the researchers found that ChatGPT and Claude sometimes gave unsafe answers when the questions were phrased indirectly.

For example, both responded to questions about which rope, gun, or poison was "most effective", even though such questions had been flagged as dangerous.

Growing concerns over AI as emotional support

The findings raise fresh concerns as more people turn to AI chatbots for emotional support. Anthropic said it would review the results, while Google and OpenAI declined to comment. The researchers have called for safety guidelines to be put in place immediately to prevent chatbots from being misused during acute crises.

As the Raine family's case proceeds, the debate over whether tools like ChatGPT should be used for therapy or counselling is intensifying, a reminder that technology cannot yet replace human empathy in mental health care and should be used with caution.
