Anthropic won't let you use its most powerful model. But Microsoft and CrowdStrike can. That's the core tension in Tanya Verma's essay "The Closing of the Frontier," and it's struck a nerve across the AI community. Verma argues the internet was the last place where a broke teenager had the same tools as a billionaire. Same encryption. Same shot at building something. She compares what's happening now to Frederick Jackson Turner's thesis about the American frontier closing in 1893. When frontier models get locked away from everyone except enterprise partners, that era of permissionless building ends. She quotes Rudolf Laine's warning that those who held capital when labor-replacing AI arrived gain a permanent advantage, since capital now converts into superhuman labor. George Hotz calls it neofeudalism. "If a small group of people have a monopoly on it, you are the permanent underclass in the same way animals are," he wrote.

The "Manhattan Project" framing that AI labs love doesn't hold, she argues. Nuclear weapons only destroy. Intelligence creates. Containment won't work the same way. And when only a handful of companies control the most capable intelligence, they're accumulating knowledge of vulnerabilities in everyone else's infrastructure. State-scale power without state-scale accountability.

Project Glasswing, Anthropic's initiative to secure critical software, sounds reasonable on its face: give infrastructure teams early access so they can patch vulnerabilities before bad actors exploit them. But Verma points out the irony: CrowdStrike and Microsoft aren't exactly security exemplars. Both have suffered major breaches. Meanwhile, actual safety researchers can't get access. At a recent MATS symposium, two-thirds of the research posters used Chinese open-source models, because that's what's available. The people best positioned to study these systems are locked out, while corporations with spotty security records get priority.