The Debian Project has stepped back from formalizing any policy on AI-generated contributions following a two-week debate in February and March 2026. Developer Lucas Nussbaum opened the discussion by circulating a draft General Resolution that would have required explicit disclosure when a significant portion of a contribution was AI-generated without manual modification, mandatory tagging with labels like '[AI-Generated]', and a prohibition on feeding non-public project data, including embargoed security reports, into generative AI tools. Though Nussbaum invited feedback before formally submitting the GR, the proposal never advanced, and Debian will continue handling AI-assisted contributions on a case-by-case basis under existing policies.

A core obstacle was definitional. Developer Russ Allbery argued that the term "AI" is too broad and semantically unstable to serve as the basis for durable policy, calling instead for precise language such as "LLM." Sean Whitton went further, suggesting that any resolution should distinguish between specific use cases — code review assistance, prototype generation, and production code — potentially permitting some while restricting others. Nussbaum pushed back, contending that the specific underlying technology mattered less than the broader question of automated code generation, but the project could not reach agreement even on that framing.

The debate also exposed disagreements over contributor dynamics, copyright exposure, and cost equity. Developer Simon Richter raised what he called the "onboarding problem": an AI agent can execute a basic task under a maintainer's guidance, but when the exchange is over, nothing sticks. No skill is carried forward, and no contributor shows up next month better than before. The maintainer invests the same time they would in a junior human developer but gets a one-time patch rather than a person. Others flagged <a href="/news/2026-03-14-john-carmack-pushes-back-on-open-source-training-restrictions">copyright and licensing risks</a> inherent to LLM-generated code, the environmental cost of running large models, and potential inequities for contributors who cannot afford paid AI tools. Nussbaum referenced a study co-authored by an Anthropic employee examining how AI use affects skill formation, suggesting the impact on contributors was more nuanced than critics allowed.

Debian mirrors a wider pattern in open source: projects confronting AI contribution policy without clear legal frameworks, community consensus, or even agreed-upon vocabulary. As reported by Joe Brockmeier for LWN.net, Hacker News commentary on the thread framed the crux of the issue as contributor accountability: whether someone used an LLM matters less than whether they exercised genuine due diligence over what they submitted. The Python Software Foundation ran into similar walls in 2025, when its own working group on AI-generated contributions dissolved without producing binding guidance. For Debian, the absence of a GR means the next dispute over an AI-assisted patch will land on a maintainer's desk with no policy behind it.