George Rosamond wants to know how AI is changing BSD development, and he's using the NYC*BUG mailing list to get answers. The thread, which will feed into a summer 2026 presentation, asks hard questions about whether BSD projects need explicit AI policies, how tools like Claude Code and AWS Bedrock are reshaping daily workflows, and what individual contributors should consider before feeding their code problems to an LLM.

NetBSD already has an answer on the policy front. The project's Commit Guidelines explicitly classify code from GitHub Copilot, ChatGPT, or Meta's Code Llama as "tainted code" that needs written approval from core developers before it can land in the repository. The policy cites licensing and authorship concerns, plus a growing headache: people using LLMs to discover alleged CVEs just to pad their credentials. It's a strict stance that treats AI assistance as a compliance problem requiring extra vetting.

The Linux kernel community sees things differently. Rather than banning AI tools outright, it relies on the Developer Certificate of Origin. Kernel maintainer Greg Kroah-Hartman has made clear that whoever signs off on code is on the hook for it being legal and original, regardless of how it was written. One approach treats the tool as the risk; the other puts the burden on the human behind the keyboard.

Rosamond's thread raises practical questions too, like whether BSD projects should use LLMs for vulnerability discovery or shell integration (he quips "oh, please no" to that last one). But the real tension cuts deeper. BSD contributors have to decide: treat AI-generated code as suspect by default, or trust that the human signing the commit knows what they're submitting. The answer will shape how these projects write software for years.