Anthropic just hit a wall in its push for Pentagon business. A federal appeals court has denied the company's motion to strip the "supply chain risk" label that's been hanging over it, the New York Times reports. The ruling keeps Anthropic in a category that makes winning sensitive government contracts much harder, if not impossible, and means the courts won't stop the military from blacklisting the AI company over its refusal to let Claude power surveillance or autonomous weapons. The stakes: billions in government contracts.
The label stems from Anthropic's heavy reliance on hardware and infrastructure it doesn't control. The company depends on Nvidia chips manufactured by TSMC in Taiwan to train its models, and it runs on cloud infrastructure from Amazon Web Services and Google Cloud, both major investors. Federal regulators see these dependencies as vulnerabilities when defense work is involved: concentrating critical AI chip production in Taiwan, given geopolitical tensions with China, makes policymakers nervous.
Every AI company building large models faces the same hardware bottleneck. Anthropic's case shows that having U.S.-based cloud partners doesn't automatically satisfy federal procurement standards. The government wants more than good intentions around data sovereignty and infrastructure integrity.
Courts won't override security classifications just because an AI company wants a defense contract.
For Anthropic, which has positioned itself as the responsible, government-friendly AI player, the ruling is a real setback. The company now has to address the underlying supply chain concerns or accept that certain federal deals stay out of reach.