The US Department of War's decision to declare Anthropic a 'supply chain risk' didn't emerge from nowhere. The company had refused to strip contractual language prohibiting its Claude models from being used for mass surveillance and autonomous weapons systems. That refusal, the DoW decided, wasn't a legitimate business position; it was grounds for potentially freezing Anthropic out of the entire defense ecosystem. Blogger and podcaster Dwarkesh Patel published a widely circulated essay on March 11 calling it a historic inflection point. He's right, but not quite for the reasons he states.
Patel gives the DoW genuine credit: no military should become dependent on a private company that can pull a kill switch on mission-critical technology. He says he might personally have walked away from the Anthropic contract for exactly that reason. But there's a categorical difference between declining to do business with a company and threatening to destroy it over terms it won't negotiate away. And the supply-chain designation, if it holds, reaches well beyond Anthropic. Amazon, Google, Nvidia, and Palantir all integrate Claude into defense-adjacent products, so the entire industry faces an implicit ultimatum: drop your ethical constraints or risk the same designation.
What the DoW appears not to have considered, or considered and dismissed, is that the constraints it wants removed exist for reasons that go beyond corporate liability management. As autonomous AI systems move deeper into critical infrastructure, writing code, advising commanders, and operating as weapons platforms, the question of who sets their ethical guardrails stops being a procurement issue and becomes something more fundamental. Patel invokes Stanislav Petrov, the Soviet officer who judged his early-warning system's report of incoming American missiles to be a false alarm and declined to pass it up the chain, as a template for why moral constraints in AI systems are a feature rather than a liability. The analogy holds, though it undersells the scale problem: Petrov was one man making one decision. AI systems running across thousands of military applications will face analogous calls constantly, with no individual accountable for the outcome.
The odds Patel cites for the DoW's designation holding (19%, sourced from prediction markets) should be read as an informed market signal, not as settled probability. What's more revealing is the gap between that number and the gravity of the precedent being contested. Even if the designation is overturned, the episode has already demonstrated that a government agency will attempt this kind of coercion. Other labs are watching. The implicit message to any AI company with safety constraints is clear: they're negotiable, under sufficient pressure.
Patel closes by pointing to the China framing: if the US is racing against an authoritarian system that tolerates no private refusal of state power, coercing Anthropic imports the very principle the US claims to be racing against. That's a clean rhetorical move, and accurate as far as it goes. But the more immediate problem isn't ideological coherence. It's that once safety constraints become subject to government override, the market incentive to build them in the first place collapses. Why invest in ethical architecture if it becomes a liability in your next defense contract negotiation? The DoW may believe it's solving a vendor-dependency problem. It's actually dismantling the commercial logic that makes AI safety constraints viable at all.