You’re under investigation. An audit letter lands. A whistleblower files a complaint. Before you pick up the phone to call your attorney, you do what millions of people now do instinctively: you open ChatGPT, or Claude, or Gemini, and start typing.
“What are the penalties for an Anti-Kickback violation?”
“Can they subpoena my pharmacy records?”
“How bad is a Stark Law self-disclosure?”
It feels like private research. It is not. A federal court just issued a ruling that every one of those prompts, and every AI-generated response, can be seized, subpoenaed, and used against you, with no privilege protection whatsoever.
The Ruling: United States v. Heppner
On February 17, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York issued what appears to be the first ruling directly addressing whether a user’s communications with a publicly available generative AI platform are protected by attorney-client privilege or the work product doctrine.
The facts are straightforward — and instructive. Bradley Heppner, a former financial services executive, learned he was the target of a federal criminal investigation. On his own initiative, he turned to the consumer version of Anthropic’s AI chatbot, Claude, and generated approximately 31 documents of prompts and AI-generated responses analyzing his legal exposure. He incorporated information his own attorneys had conveyed to him. He then shared the AI outputs with his defense counsel.
When FBI agents arrested Heppner, they searched his home and seized his devices, and the AI documents with them. His lawyers claimed privilege. The government disagreed.
Judge Rakoff sided with the government on every point.
Why the Court Said “Not Privileged”
The three elements of attorney-client privilege are that the communication must be (1) between a client and their attorney; (2) intended to be, and in fact, kept confidential; and (3) made for the purpose of obtaining legal advice (id. at 4). The court determined that Heppner’s communications failed on all three counts:
- An AI chatbot is not an attorney. The attorney-client privilege applies only to communications between a client and counsel. Claude is a piece of software, not a licensed attorney, and it tells users exactly that when asked for legal advice.
- There was no confidentiality. This is the element that should alarm every executive reading this post. Anthropic’s privacy policy states that user inputs and outputs may be used for model training and can be disclosed to third parties, including governmental regulatory authorities. When you agree to those terms and start typing, you have, in the court’s eyes, surrendered any reasonable expectation of privacy.
- The purpose was not to obtain legal advice. While the court acknowledged that this was a closer call, it ultimately determined that Heppner was not seeking legal counsel from Claude, because Claude disclaims providing legal advice. He was conducting independent research. The fact that he later shared the AI outputs with his attorneys did not retroactively create privilege. As Judge Rakoff wrote: “It is black-letter law that non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel.”
The work product doctrine fared no better. Because Heppner acted on his own initiative — not at the direction of counsel — the AI documents did not reflect counsel’s mental impressions or litigation strategy. No attorney direction, no work product protection.
The Footnote That Should Keep You Up at Night
Here is the detail most commentaries are burying: Judge Rakoff noted in a footnote that if Heppner had input information originally conveyed by his attorneys, the privilege over those underlying attorney-client communications could itself be waived — because Heppner disclosed that information to a third party (the AI platform).
Read that again. A single AI session does not just fail to create privilege. It can destroy the privilege you already had over your conversations with your actual attorney.
Why Healthcare Clients Face Heightened Risk
If you operate in healthcare, the stakes of Heppner are amplified by the regulatory environment you already navigate:
- OIG and DOJ investigations into False Claims Act violations, Anti-Kickback Statute issues, and Stark Law compliance involve extraordinarily sensitive facts. Typing those facts into a public AI tool creates a discoverable roadmap to your exposure — built by your own hand, with zero privilege shield.
- HIPAA and 42 CFR Part 2 impose strict confidentiality obligations on protected health information (“PHI”) and substance use disorder records. Inputting PHI into a consumer AI platform may not only waive privilege — it could independently constitute a reportable breach under federal privacy law.
- State licensing boards, DEA inquiries, and Medicare audits generate the kind of anxiety that sends people to AI chatbots at 2 a.m. But those late-night sessions are creating permanent, discoverable records on third-party servers.
And finally, in any litigation, you should assume that opposing counsel is savvy and has read or heard about this case. Expect many more subpoenas directed at AI companies for your data, and discovery requests asking which AI tools you use and demanding that you turn over your chat histories.
The Door Judge Rakoff Left Open
Critically, the court did not rule that all AI-assisted legal work is unprotected. Judge Rakoff explicitly suggested the analysis “might arguably” be different if:
- Counsel directed the client to use the AI tool as part of a structured legal engagement — potentially treating the tool as an agent of the attorney under the Kovel doctrine; and
- The work was performed under attorney supervision, reflecting counsel’s strategy and mental impressions.
This distinction is the key takeaway. There is a world of difference between a client independently querying a free chatbot at home and using an AI tool when supervised and directed by counsel. Heppner penalizes the former.
What You Should Do Right Now
Stop using your own AI tools to analyze your legal exposure. If you are facing any type of government investigation, compliance audit, licensing inquiry, or legal dispute, keep the facts off your AI tool. Period.
Do not input privileged information into consumer AI platforms. One AI session can unravel privilege over your entire attorney relationship, not just the AI documents.
Bring the raw facts to your attorney first. Under the absolute protection of attorney-client privilege, let counsel determine how, and whether, to leverage AI tools safely, on secure platforms, with proper confidentiality controls.
Work with attorneys who understand both your regulatory landscape and AI technology. The right approach is not to avoid AI entirely. It is to use AI through counsel, under documented protocols, with enterprise-grade protections. That is exactly the approach we take at The Health Law Partners (HLP).
Frequently Asked Questions: AI and Attorney-Client Privilege After Heppner
Q: Are my conversations with ChatGPT or Claude about a legal matter privileged?
A: Under the Heppner ruling, no — not if you are using a public, consumer AI platform on your own initiative.
Q: What if I only used AI to organize my thoughts before talking to my lawyer?
A: That argument failed in Heppner. Forwarding AI outputs to counsel after the fact does not convert them into privileged communications.
Q: Can inputting attorney-client communications into AI waive my existing privilege?
A: Yes. Judge Rakoff flagged that disclosing information received from counsel to a third-party AI platform could waive privilege over the underlying attorney communications.
Q: Is there any way to use AI in a privileged legal workflow?
A: Potentially — the court left the door open for attorney-directed use of secure, enterprise AI platforms under confidentiality controls, analogous to the Kovel doctrine for non-lawyer agents of counsel.
The Health Law Partners, P.C. (HLP) advises healthcare providers, facilities, and executives on regulatory compliance, government investigations, HIPAA, and operational matters, leveraging advanced technology within secure, privileged frameworks. If you have questions about AI use in the context of a legal matter, or if you are facing a healthcare regulatory investigation, contact Clinton Mikel (cmikel@thehlp.com) or your regular HLP attorney at (248) 996-8510.