Redox OS updated its CONTRIBUTING.md on GitLab this week to prohibit code generated by large language models — GitHub Copilot, ChatGPT, and similar tools are named explicitly — and to require contributors to sign a Developer Certificate of Origin attesting that submissions are their own original work.

The project, a microkernel operating system written in Rust, has built its reputation on correctness and auditability. Kernel maintainers need to be able to read and vouch for every commit. That accountability gets complicated fast when contributors pass off model output as their own, and when the training provenance of that output remains legally uncertain.

Redox isn't the first to draw this line. Daniel Stenberg, the creator of curl, has been consistently vocal about rejecting AI-generated patches, arguing they shift review burden onto maintainers while reducing the submitter's own understanding of the code. Linux kernel stable-tree maintainers have raised similar concerns, and a number of cryptography libraries have quietly tightened their contribution guidelines in the same direction over the past year.

What distinguishes Redox's move is the formality. Encoding the ban in a DCO-backed policy — rather than leaving it to per-PR judgment calls — means contributors are on notice from day one and maintainers have documented grounds to close non-compliant submissions without debate.
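Mechanically, a DCO is enforced through git's standard sign-off trailer: committing with `-s` appends a `Signed-off-by:` line attesting that the author wrote the change or has the right to submit it. The sketch below demonstrates that mechanism in a throwaway repository; the name and email are placeholders, and Redox's specific CI enforcement is not described in the policy text.

```shell
# Create a scratch repo to demonstrate a DCO sign-off.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Contributor"
git config user.email "jane@example.com"

echo "fn main() {}" > main.rs
git add main.rs

# -s (--signoff) appends the Signed-off-by trailer,
# the standard DCO attestation of original authorship.
git commit -q -s -m "Add main.rs"

# Show the commit message, including the trailer.
git log -1 --format=%B
```

Projects that enforce a DCO typically reject any pull request containing commits without this trailer, which is what gives maintainers the "documented grounds" to close non-compliant submissions.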

The split across open source is real and widening. Projects treating AI assistance as a productivity multiplier are moving one way; security- and systems-oriented projects where human authorship is load-bearing for the trust model are moving the other. For the latter group, knowing who wrote something — and that they actually understood it — is not incidental. Redox OS has now said so in writing.