A licensing dispute over a Python character-encoding library has become the AI era's most explosive open-source flashpoint. Dan Blanchard, maintainer of chardet — a library downloaded roughly 130 million times per month — released version 7.0 this week under a permissive MIT license, replacing its previous GNU Lesser General Public License (LGPL). His justification: the new version is a clean-room rewrite accomplished in approximately five days with the help of Anthropic's Claude, which is now formally credited as a project contributor. The rewrite delivered a 48x speed improvement, and Blanchard cited his own JPlag plagiarism analysis showing less than 1.3% structural similarity between version 7.0 and any prior release, with the matched tokens limited to generic Python boilerplate such as argparse setup and import blocks.
The move was swiftly contested by an individual claiming to be Mark Pilgrim, chardet's original creator, who opened a GitHub issue arguing that the maintainer's deep prior exposure to the LGPL codebase disqualifies the rewrite as a legitimate clean-room implementation — and that routing the rewrite through an AI tool confers no additional legal rights. The argument cuts to a core unresolved question: whether a large language model trained on copyleft-licensed code can ever produce output that is legally clean, regardless of how different the resulting source code appears on the surface. No court has yet ruled definitively on this question, leaving the dispute in a legal gray zone that could take years to resolve.
Reactions from the wider open-source community have been pointed. Bruce Perens, co-founder of the Open Source Initiative and author of the original Open Source Definition, declared the economics of the whole field finished: "The entire economics of software development are dead, gone, over, kaput." His argument is that AI can reconstruct any codebase from its test suite or documentation alone, making both proprietary and copyleft licensing models unenforceable in practice. The Free Software Foundation's executive director, Zoë Kooyman, pushed back, arguing that there is nothing clean about a model trained on the very code it is reimplementing, and warning that AI-assisted relicensing poses a direct threat to user freedom.
Flask creator Armin Ronacher was blunter: copyleft enforcement has always depended on the practical friction of human effort, and that friction is now gone. What the chardet case makes concrete is how unremarkable this process is becoming. A production-grade replacement for one of Python's most-downloaded libraries was written in under a week, with a documented performance gain, and the licensing obligation attached to the original code for decades did not survive it. As AI coding agents become routine contributors to open-source projects, the question of what a software license actually enforces — not what it says, but what any rights-holder can realistically compel — is moving from a theoretical concern to a practical one the courts will have to answer.