Judge Rakoff of the Southern District of New York just ruled that attorney-client privilege doesn't cover your AI conversations. In US v. Heppner, a defendant used Anthropic's Claude to draft legal documents without consulting their attorney first, then handed the AI-generated materials to their lawyer. The court held that privilege didn't apply, pointing to Claude's Terms of Service as part of its reasoning.

The ruling turns on specific facts. The defendant used Claude "in lieu of an attorney," not as a tool their lawyer told them to use. If counsel had directed the client to use AI, the outcome might have been different, but Judge Rakoff didn't address that scenario explicitly. The decision leaves the door open on whether attorney-directed AI assistance could qualify for privilege or work-product protection.

None of the major AI providers explicitly protects attorney-client privilege in its standard terms. OpenAI, Google, and Microsoft all reserve broad data-usage rights. Consumer versions of ChatGPT, Copilot, and Gemini can use your inputs to train their models. Enterprise tiers offer stronger data protections, but even those don't mention privilege by name. Only specialized legal tools like Harvey AI have started addressing this gap directly.

This case sets an early precedent. The message to lawyers is clear: stop treating AI tools like confidential extensions of your practice. If you're putting client information into a chatbot without reading the terms of service, you're taking a real risk. Right now, the courts won't bail you out.