Lucas Nussbaum posted his draft General Resolution to the Debian mailing lists in mid-February and, by any reasonable measure, kicked off a genuine argument. His proposal was specific: mandatory disclosure when AI tools contributed a significant portion of a patch, machine-readable '[AI-Generated]' tags, full contributor accountability for code quality and license compliance, and a hard ban on feeding non-public Debian data — embargoed security reports, for instance — into any generative AI system.
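The draft GR did not specify how a machine-readable tag would be formatted or consumed; as a hypothetical sketch only, one could imagine the '[AI-Generated]' marker appearing as a trailer-style line in a commit message or changelog entry, with tooling scanning for it like this (the tag name comes from the proposal, but the `: tool` suffix and the parsing convention here are assumptions for illustration):

```python
import re

# Hypothetical convention: a line of the form "[AI-Generated]" or
# "[AI-Generated]: <tool name>" somewhere in the commit message.
# The GR text names the tag; everything else here is an assumption.
TAG_RE = re.compile(r"^\[AI-Generated\](?::\s*(?P<tool>.+))?$", re.MULTILINE)

def find_ai_tag(message: str):
    """Return the declared tool name (or '' if the tag carries no tool)
    when the message is tagged, or None when no tag is present."""
    m = TAG_RE.search(message)
    if m is None:
        return None
    return m.group("tool") or ""

msg = """Fix off-by-one in parser

[AI-Generated]: Claude Code
"""
print(find_ai_tag(msg))          # prints the declared tool name
print(find_ai_tag("plain fix"))  # prints None
```

A convention like this would let archive tooling filter or audit tagged uploads mechanically, which appears to be what "machine-readable" was meant to enable.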
Nussbaum framed the push as analogous to Debian's past decisions on BitKeeper and proprietary security tools, arguing that the project needed a stated position rather than leaving each case to individual maintainers' ad-hoc judgment. What followed was a thread that established, fairly definitively, that Debian developers agree on very little when it comes to AI.
The first fight was over words. Russ Allbery said 'AI' was too vague to anchor durable policy and pushed for 'LLM' or more specific terminology. Sean Whitton wanted the resolution to break down distinct use cases — using a model to review code, generating a prototype, shipping model output as production patches — rather than bundling everything under one umbrella. Andrea Pappacoda and Gunnar Wolf backed the precision-first camp. Nussbaum argued back that what mattered was the functional outcome — automated code generation — not taxonomic debates about how to label the underlying technology. Nobody moved.
The terminology fight was almost beside the point compared to the harder question of what LLM contributions actually do to an open-source project over time. Simon Richter put it plainly: when a human newcomer submits a bad patch, the project invests effort in correcting them, and sometimes that investment pays off as the contributor grows into a maintainer. An AI agent doesn't accumulate skills or stick around. The return on mentorship is zero. Nussbaum cited a study co-authored by an Anthropic employee and a participant in the company's fellows program examining how AI use affects skill formation — a pointed reference, given that Anthropic builds the tools at the centre of this debate. Ted Ts'o pushed back, questioning whether AI-assisted contributions would actually thin the pipeline of future human contributors. Code quality, reproducibility, and the ethics of models trained on open-source work also came up, without resolution.
No GR was filed. Debian will handle AI-related contribution questions the same way it handles most awkward edge cases: existing policy, case by case, maintainer discretion. That may be the pragmatic outcome when the technology is moving faster than the vocabulary used to describe it — but it also means the question will return, probably in a more pointed form, the next time a significant AI-assisted contribution lands in the wrong place.