Shaun Warman lays out an uncomfortable thesis: U.S. frontier AI labs were financed on the assumption they'd become monopolies. Roughly $1 trillion in committed capex rests on that bet. But open-weight Chinese models like DeepSeek, Qwen, Kimi, and GLM are delivering comparable capabilities at single-digit cents on the dollar.
Six to twelve months. That's the performance gap, and it's shrinking fast.
The moat the capital structure requires? Technology isn't providing one.
Warman argues that when technology fails to create scarcity, capital manufactures it through other means: regulatory enclosure, vertical integration, bundled distribution. His predictions for the next eighteen months are stark. Expect security-justified restrictions on Chinese open weights. Watch frontier labs stop selling models and start selling outcomes, absorbing their own customers as operators. What emerges is a split market where American users pay closed-lab pricing while everyone else routes around U.S. restrictions entirely.
Hacker News commenters pushed back. Some argued frontier labs could simply withhold their best models from public release, maintaining an R&D acceleration edge that open labs can't match. Others noted that open weights and open research were the norm in AI before OpenAI's GPT-3, suggesting the moat thesis may be overstated. A different market split was proposed too: frontier labs serve enterprises that need top-tier performance and compliance cover, while open-source models handle everyone else. And model harnesses plus the Model Context Protocol (MCP) might let cheaper open models compete through clever design rather than raw intelligence, as the sketch below illustrates.
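To make that last point concrete, here is a minimal sketch of an MCP tool server using the official Python SDK's FastMCP helper. The server name and the search_repo tool are invented for illustration; the idea is that a harness of tools like this can be wired to any MCP-capable client running a cheap open-weight model.

```python
# A toy MCP server: the "clever design" lives in tools like this rather
# than in the model. Install the official SDK with: pip install "mcp[cli]"
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-search")  # hypothetical server name


@mcp.tool()
def search_repo(query: str, root: str = ".") -> list[str]:
    """Return up to 20 'path:line: text' matches for query in *.py files.

    A deliberately naive substring search: the model only has to decide
    WHEN to call this, not how to implement it.
    """
    hits: list[str] = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            if query in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
                if len(hits) >= 20:
                    return hits
    return hits


if __name__ == "__main__":
    mcp.run()  # serves over stdio; any MCP-capable client can connect
```

The capability here (codebase search) lives in ordinary code the model merely invokes, which is exactly the kind of design work that doesn't require a frontier-scale training run.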
DeepSeek itself shows the asymmetry Warman describes. It emerged from High-Flyer, a Chinese quant hedge fund that already had 10,000 Nvidia A100 GPUs and over $1.4 billion in assets before shifting to AI research in 2023. Pre-existing infrastructure and capital meant DeepSeek could open-source weights aggressively to disrupt global pricing without chasing immediate revenue. Its quant heritage shows in architectures like DeepSeek-V2, which uses a Mixture-of-Experts (MoE) design that activates only a small fraction of its parameters per token, slashing inference costs.
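A toy top-k MoE layer in plain NumPy shows why that routing cuts costs. Everything here is illustrative (the tiny dimensions, the softmax gate, the ReLU experts) and is not DeepSeek's implementation; it just captures the core trick of paying only for the experts a token actually uses.

```python
# Toy top-k Mixture-of-Experts layer. Sizes are illustrative, not
# DeepSeek's; the point is the cost structure, not the architecture.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FF = 64, 256   # tiny dimensions for illustration
N_EXPERTS, TOP_K = 8, 2   # each token is routed to 2 of 8 experts

# Each expert is a small two-layer feed-forward net. A dense layer with
# the same total parameter count would run ALL of this for EVERY token.
experts = [
    (rng.normal(size=(D_MODEL, D_FF)) * 0.02,
     rng.normal(size=(D_FF, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
gate_w = rng.normal(size=(D_MODEL, N_EXPERTS)) * 0.02  # router weights


def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token through its top-k experts only.

    Inference FLOPs scale with TOP_K, not N_EXPERTS: here each token
    pays for 2/8 of the expert compute its parameter count implies.
    """
    logits = x @ gate_w                               # (tokens, N_EXPERTS)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]     # chosen expert ids
    sel = np.take_along_axis(logits, top, axis=-1)    # their gate scores
    weights = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over top-k

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                       # per-token dispatch
        for w, e in zip(weights[t], top[t]):
            w1, w2 = experts[e]
            out[t] += w * (np.maximum(x[t] @ w1, 0.0) @ w2)  # ReLU FFN
    return out


tokens = rng.normal(size=(4, D_MODEL))  # a 4-token toy batch
print(moe_layer(tokens).shape)          # -> (4, 64)
```

Scaled up, this is how DeepSeek-V2 carries 236B total parameters while activating roughly 21B per token.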
The combination of patient capital and that kind of engineering frugality makes DeepSeek a different kind of competitor from a VC-backed startup racing to monetize.