A March 2026 commentary from the R Street Institute, authored by Logan Seacrest, surveys the fast-spreading use of generative AI across legal practice and the growing pains courts face in trying to keep up. The piece opens with a landmark ruling from Judge Jed Rakoff of the Southern District of New York, who determined that AI-generated documents are not protected by attorney-client privilege. The case centered on Bradley Heppner, a former CEO facing a $300 million securities fraud investigation, who had used a commercial AI chatbot to brainstorm defense strategies under the mistaken assumption that the conversations were confidential. Rakoff's ruling was unambiguous: chatbots are not lawyers, and AI's novelty does not exempt it from longstanding legal principles.
The hallucination problem in legal filings has reached a scale that is difficult to ignore. A tracking database has logged nearly 700 incidents of AI-fabricated content in U.S. court filings since early 2025, and courts have responded with escalating sanctions: a $5,000 fine in Alabama for a team that submitted invented caselaw, and a comparable incident in Wisconsin that resulted in license suspensions. The problem is not confined to lawyers using general-purpose chatbots. Purpose-built legal research platforms Westlaw AI and Lexis+ AI have also been documented producing inaccuracies, suggesting something systemic rather than mere user error. In one Nevada County, California case, defense attorneys argued that AI-generated errors constitute an existential threat to due process, prompting the local district attorney to appoint a dedicated AI policy coordinator and issue a formal office-wide directive.
Not all applications are cautionary tales. The Los Angeles County Public Defender's Office uses AI to parse incoming police reports, standardize formats, and surface relevant information, saving thousands of staff hours annually. The Montgomery County, Texas District Attorney uses AI for document summarization, translation, and large-dataset analysis, while AI-assisted body camera video review now touches more than 80 percent of criminal cases. Seacrest frames these gains through the economics of "cost disease" — the decades-long trend making labor-intensive legal services increasingly unaffordable — and argues AI could help reverse the decline in trials by lowering litigation costs and improving access to justice.
Efficiency gains are meaningless without governance infrastructure to support them. The King County Prosecuting Attorney's Office in Seattle is held up as a model, having declared it would not accept AI-assisted police narratives as evidence given unresolved audit-trail concerns. Seacrest argues that prosecutors' and public defenders' offices need formal <a href="/news/2026-03-14-anthropic-institute-societal-economic-governance">AI use policies</a> — covering confidentiality implications, tool vetting, and accountability mechanisms — before institutional dependence on AI takes hold. The Heppner ruling settled that chatbots are not lawyers. What it did not settle is what happens when law offices restructure themselves around tools whose fabrications have already been documented in nearly 700 court filings. That is the question Seacrest is pressing, and no court has answered it yet.