John Carmack, id Software co-founder and active AI researcher, sparked a heated debate on Twitter/X this week by defending AI companies' right to train models on open source code. Carmack drew on his own history of open sourcing classic id Software game engines, including Doom and Quake, framing those releases as unconditional gifts. His argument is straightforward open source philosophy: permissive licenses grant genuine freedoms, and if an author wants to restrict commercial use, they should pick a license that says so. For Carmack, pushing back on AI training reflects a misunderstanding of what open source was always supposed to allow.

Hacker News pushed back hard. Commenters pointed to a fundamental asymmetry Carmack's argument skips over: many developers contributed code to open repositories for community or non-commercial reasons. The fact that a license technically permits commercial use doesn't mean authors are indifferent to billion-dollar companies extracting value from their work at scale. <a href="/news/2026-03-14-anthropic-silent-ab-test-claude-code">Anthropic</a> and OpenAI, both named explicitly in the thread, are generating substantial revenue from models trained on that code while original contributors get nothing. Critics drew a sharp line between what is legally permitted and what is ethically uncontroversial, arguing that the two are not the same.

The thread also dug into macroeconomic anxieties that go beyond licensing. Commenter SirensOfTitan argued that technologists like Carmack are not seriously reckoning with the labor displacement implications of <a href="/news/2026-03-14-emacs-vim-ai-terminal-native-advantage">AI automation</a> hitting white-collar and knowledge workers, a group that, unlike blue-collar workers in previous automation waves, largely lacks union representation or collective bargaining power. The commenter drew a parallel to the 2008 financial crisis, warning that even modest AI-driven layoffs in the highly leveraged US services economy could trigger a broader cascade.

Running alongside those arguments was a more practical concern about the future of open contribution itself. Commenter arjie flagged that this tension is already reshaping attitudes toward sharing code and content openly, and may drive demand for new licensing frameworks — revenue-sharing arrangements or AI-training-specific restrictions — that more explicitly govern large-scale commercial use. Whether those frameworks get built through litigation, new license terms, or legislation remains genuinely unsettled, and the Carmack thread is one data point in a debate that open source maintainers are already having among themselves.