A Washington, D.C. federal appeals court won't stop the Pentagon from blacklisting Anthropic. Defense Secretary Pete Hegseth designated the company a "national security supply-chain risk" after Anthropic refused to let the military use Claude for surveillance or autonomous weapons. The label blocks Anthropic from Pentagon contracts and could spread government-wide, costing billions.
Reuters reports this is the first time a U.S. company has received this designation publicly under procurement statutes meant to guard military systems from enemy infiltration. Anthropic is fighting back with two lawsuits, claiming First Amendment retaliation for its AI safety stance. The courts are split. A California judge previously blocked one of Hegseth's orders, saying the Pentagon appeared to have unlawfully punished Anthropic for its views on responsible AI use.
Anthropic's competitors aren't holding back. OpenAI dropped its ban on military work in early 2024 and now partners with DARPA on cybersecurity, even as it navigates internal tensions over AI safety. The Pentagon's multi-billion-dollar Joint Warfighting Cloud Capability contract is anchored by Microsoft and AWS. Palantir crunches battlefield data for the Army. Anduril builds autonomous drones. Scale AI handles data infrastructure for military AI training. Anthropic stands alone among major AI labs in refusing military work, and the financial gap is widening fast.
The question hanging over this case is whether AI companies can actually enforce ethical red lines when the world's biggest customer demands access. If Anthropic loses, the answer is no.