The Free Software Foundation announced on March 14, 2026 that Anthropic's LLM training data included "Free as in Freedom: Richard Stallman's Crusade for Free Software," a book to which the FSF holds the copyright and which is published under the GNU Free Documentation License. The FSF, which published the work alongside O'Reilly Media, argues that the GNU FDL's copyleft provisions mean any derivative work — including, in the FSF's interpretation, an <a href="/news/2026-03-15-uk-society-of-authors-human-authored-logo">LLM trained on the material</a> — must itself be distributed freely under compatible terms. The statement was authored by FSF representative Krzysztof Siewicz, not Richard Stallman himself, a distinction noted in early community discussion of the announcement.

Rather than filing suit directly, the FSF cited its limited resources as a small nonprofit and instead issued a public warning: if it were to join the existing Bartz v. Anthropic lawsuit — in which a group of authors sued Anthropic in 2024 over training data copyright infringement — it would seek "user freedom" as compensation rather than monetary damages. Specifically, the FSF would demand that Anthropic release complete training inputs, model weights, training configuration settings, and accompanying source code freely to all users. The announcement carried the deliberately tongue-in-cheek headline: "The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom."

Unlike other copyright suits against AI companies, which have sought monetary damages, the FSF's demand would turn the remedy into an open-sourcing requirement. That distinction matters enormously for Anthropic's commercial model. The underlying legal theory — that LLMs constitute derivative works under copyleft licenses like the GNU FDL, GPL, or AGPL — remains untested in court, but if accepted, it could reach well beyond Anthropic to <a href="/news/2026-03-15-bytedance-suspends-seedance-2-0-video-ai-launch-amid-copyright-disputes">every major AI lab</a> that trained on internet-scraped data containing copyleft-licensed content.

"The copyleft angle is the most interesting part of this," said Pamela Samuelson, a copyright scholar at UC Berkeley School of Law who has written about AI training data disputes. "Courts haven't grappled with whether a statistical model trained on text is a 'derivative work' in the traditional sense. If one decides it is, the implications ripple across the entire industry." Bartz v. Anthropic is currently in settlement discussions, and their outcome may determine whether the FSF's threat is ever tested in court at all.