The Linux kernel project published its first official policy on AI coding assistants. The move comes as tools like Claude and Copilot become common in developer workflows, and the kernel community needed clarity on how to handle submissions that mix human and machine work.

The policy lives in the kernel's documentation tree, and it's straightforward: AI-assisted contributions must follow the same development process as any other code, comply with the kernel's GPL-2.0-only licensing, and carry proper attribution.

AI agents cannot add Signed-off-by tags. Only a human can legally certify the Developer Certificate of Origin, the mechanism by which a submitter attests they have the right to contribute the code. The human submitter takes full responsibility: they review the AI-generated code, verify its licensing, and add their own sign-off. This isn't optional.

The policy adds a new "Assisted-by" tag for transparency. The format is "Assisted-by: Claude:claude-3-opus coccinelle sparse", naming the AI tool, the model version, and any specialized analysis tools used. Everyday tools like git or gcc don't need to be listed. Simple attribution.
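Putting the two rules together, a commit message under this policy might look like the sketch below. The subject line, tool list, and contributor name are hypothetical; only the trailer format follows the policy as described:

```
mm/slub: fix object leak on allocation failure

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane.developer@example.com>
```

The Assisted-by line credits the model and analysis tools, while the Signed-off-by line remains the human contributor's legal certification under the DCO.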

The kernel community didn't ban AI or carve out special rules for it. They applied existing legal and process frameworks to a new situation. For a project with thousands of contributors and strict licensing requirements, that pragmatism matters. Other open-source projects, such as the BSDs, will likely adopt a similar approach.