The D.C. Circuit Court of Appeals just handed Anthropic a real loss. A three-judge panel rejected the company's emergency request to pause a Pentagon designation labeling it a "supply chain risk," a label that blocks defense contractors from using Claude or any other Anthropic model on military work. The ruling creates a strange split: a California federal judge temporarily blocked the same designation last month, but because challenges to Pentagon procurement decisions run through the D.C. courts, Anthropic has to fight on two fronts. The California win holds in that district. The D.C. loss holds for Pentagon contracts nationwide.

The whole mess started in February, when CEO Dario Amodei told Defense Secretary Pete Hegseth that Anthropic would not let the Pentagon use Claude for autonomous weapons or mass surveillance of Americans. Hegseth and the Trump administration responded by hitting Anthropic with a supply chain risk designation, a tool never before used against an American company. In practice, that means any defense contractor caught running Claude, even on an air-gapped laptop at a classified facility, risks losing its Pentagon contracts. The appellate panel, including Trump appointees Gregory Katsas and Neomi Rao, acknowledged Anthropic will "likely suffer some irreparable harm" but said the government's need to control how it sources AI during an active military conflict outweighs the damage to one company's finances.

The immediate winners here are Anthropic's competitors. OpenAI, which dropped its ban on military use in 2024, is the obvious replacement for Pentagon contractors who need large language models and can't touch Claude anymore, a shift that could push the two firms' already divergent financial paths further apart. Microsoft Azure holds the Joint Warfighting Cloud Capability contract and already hosts OpenAI models for government customers. Palantir's AIP is built to run LLMs on classified networks. And companies like Anduril and Shield AI, which build autonomous systems specifically for lethal applications, are positioned to fill the exact use cases Anthropic refused to support. Open-weights models like Meta's Llama will likely see more defense adoption too, since contractors can deploy them without vendor restrictions.

Anthropic says it's "confident the courts will ultimately agree that these designations were unlawful," and the panel did order expedited consideration of the merits. But the stay denial signals that at least two of the three judges see the government's national security argument as strong, and that's a bad sign for a company trying to draw a hard line on how its models get used.