Think your AI chats are private? Here's what a new US ruling means for users

A recent US court ruling has raised concerns over the privacy of AI chatbot conversations, with lawyers warning they may not be protected under attorney-client privilege.

By Trisha Katyayan

Apr 16, 2026 13:24 IST

Conversations with AI chatbots like ChatGPT and Claude may not be as private as users assume. US lawyers are increasingly cautioning clients that such interactions are not protected under attorney-client privilege and could be accessed in court proceedings.

The concern has gained urgency following a recent ruling in New York, prompting law firms to issue advisories about the risks of using AI tools during legal disputes.

The case that raised alarms

The issue came to light after a decision by US District Judge Jed Rakoff in Manhattan. The case involved Bradley Heppner, former chair of GWG Holdings, who faces charges of securities and wire fraud and has pleaded not guilty.

While preparing his defence, Heppner used Claude to draft reports, which he later shared with his lawyers. Prosecutors argued these documents should not be protected, and the court agreed.

Judge Rakoff stated that no attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude", adding that Claude itself "expressly provided that users have no expectation of privacy in their inputs".

Why privilege does not apply

Attorney-client privilege protects confidential communication between a lawyer and their client. However, legal experts point out that AI platforms are considered third parties.

Sharing case-related details with such tools may effectively waive that protection: once information is disclosed to anyone beyond the lawyer, it can potentially be accessed by prosecutors or opposing parties.

Adding to the concern, AI companies like OpenAI and Anthropic note in their policies that user data may be shared with third parties.

What lawyers are advising

Following the ruling, more than a dozen US law firms have issued guidance to clients. A common recommendation is to avoid using consumer AI chatbots for sensitive legal matters.

Some firms, including O’Melveny & Myers, suggest opting for closed, corporate AI systems, though these approaches are still largely untested in court.

Lawyers also advise that if AI tools must be used for research under legal guidance, users should state that context explicitly in their prompts, which may help reduce, though not eliminate, the risk.