Redox OS, a microkernel operating system written in Rust, has adopted a Developer Certificate of Origin (DCO) policy alongside a strict ban on LLM-generated code contributions. The project joins a handful of open source maintainers taking a formal stance against AI-assisted submissions — citing not just code quality, but the disproportionate review burden such contributions impose. AI-generated pull requests can appear superficially correct while still demanding the same scrutiny for correctness, security vulnerabilities, and license compliance as any other submission. The friction of writing a PR has historically served as an implicit quality filter; LLMs remove it.

The broader landscape of open source AI governance was recently mapped by researcher Phil Eaton, whose March 2026 survey of 112 major source-available projects found that only four — NetBSD, GIMP, Zig, and QEMU — had formally banned AI-assisted contributions, while 71 had already accepted commits explicitly labeled as AI-assisted. High-profile projects across the full stack, including the Linux kernel, curl, Firefox, Chromium, Django, and MariaDB, have accepted AI-assisted work. Projects such as DuckDB and Elasticsearch appear to have policies discouraging AI contributions yet have accepted them anyway — a gap between stated policy and actual practice that Eaton's data makes hard to ignore.

A central tension in the Hacker News discussion of the Redox OS announcement is the asymmetry between how maintainers and contributors are treated under such bans. Critics argue that prohibiting contributor use of LLMs while maintainers adopt the same tools creates a double standard that is difficult to justify. Defenders of restrictions return to the friction argument: with the cost of producing a plausible-looking pull request near zero, review capacity, not contribution volume, becomes the scarce resource a project must protect. Eaton's survey suggests most projects have not yet decided which side of that argument they are on.