Shaun Warman has written an essay that should make U.S. frontier-lab investors uncomfortable. His argument is direct: American AI was financed on the assumption that frontier models would become a monopoly business, commanding monopoly-grade pricing. Open-weight models from Chinese labs are breaking that assumption. DeepSeek, Qwen, Kimi, and GLM have narrowed the performance gap to six to twelve months. The cost gap runs ten to thirty times in open weights' favor. A model DeepSeek reportedly trained for $5.6 million in compute competes with U.S. equivalents that cost $500 million to $1 billion. And with the open-source deployment stack (vLLM, llama.cpp, Ollama, LangChain), these models run on rented hardware without anyone's permission.

Warman puts planned U.S. AI infrastructure spending at roughly $1 trillion over four years. Servicing that bet requires high-margin rents; commodity pricing doesn't generate them.
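The shape of the mismatch is easy to see with back-of-envelope arithmetic. Only the $1 trillion over four years and the ten-to-thirty-times cost gap come from the essay; the monopoly-revenue figure below is an illustrative assumption, not a number Warman gives.

```python
# Back-of-envelope: annual spend vs. revenue under commodity pricing.
# The $1T / 4-year figure and the 10-30x gap are from the essay;
# the assumed monopoly revenue is purely illustrative.

TOTAL_CAPEX = 1_000_000_000_000  # ~$1T planned U.S. AI infrastructure spend
YEARS = 4

annual_capex = TOTAL_CAPEX / YEARS  # $250B/year just to cover the build-out

def revenue_after_price_collapse(monopoly_revenue: float, gap: float) -> float:
    """Revenue once prices fall to 1/gap of monopoly pricing, volume held fixed."""
    return monopoly_revenue / gap

# Suppose (illustratively) monopoly pricing would have produced $300B/year.
# A 10-30x price collapse leaves $10B-$30B against $250B/year of spend.
assumed_monopoly_revenue = 300_000_000_000
low = revenue_after_price_collapse(assumed_monopoly_revenue, 30)
high = revenue_after_price_collapse(assumed_monopoly_revenue, 10)
print(f"annual spend: ${annual_capex / 1e9:.0f}B")
print(f"commodity-priced revenue: ${low / 1e9:.0f}B - ${high / 1e9:.0f}B")
```

However the assumed revenue figure is varied, dividing it by ten to thirty leaves a number that cannot plausibly service $250 billion a year of spend.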

When technology fails to create scarcity, capital manufactures it through other means. Warman sees regulatory enclosure coming: Chinese open weights restricted under national security justifications, creating a split global market where domestic users pay closed-lab premiums while the rest of the world routes around U.S. infrastructure entirely. He also expects frontier labs to absorb their customers through vertical integration, becoming operators rather than just model providers.

The Hacker News discussion raised a counter-strategy. Commenters suggested frontier labs could withhold their most advanced models from public release, keeping them internal for recursive self-improvement rather than selling API access. A retained "teacher" model generating synthetic data and automating alignment could create a genuine capability gap that open fine-tuning cannot replicate. The publicly available frontier would always be a generation behind what exists inside the lab.

Warman's advice to developers is pragmatic: build on the open commons now while regulatory barriers are still forming, and architect for jurisdictional flexibility before that choice becomes involuntary.