On March 3, 2026, the U.S. Department of War formally designated Anthropic a supply chain risk under 10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act of 2018 (FASCSA) — the first time such a designation has been applied to an American company. The action stems from a breakdown in renegotiations over a July 2025 agreement under which Anthropic's Claude became the first frontier AI model approved for use on <a href="/news/2026-03-14-palantir-demos-show-how-the-military-could-use-ai-chatbots-to-generate-war-plans">classified U.S. government networks</a>. The Pentagon asked Anthropic to waive two contractual prohibitions embedded in that agreement: restrictions on using Claude for mass domestic surveillance and for fully autonomous weapons systems. When Anthropic refused, citing its core safety commitments, the Department of War escalated to a formal supply chain risk designation covering all Anthropic affiliates, products, and services. Days earlier, on February 27, President Trump directed all federal agencies to cease using Anthropic's AI technology within a six-month phase-out period, and Secretary of War Pete Hegseth announced the pending designation.
Under 10 U.S.C. § 3252, the Secretary of War may exclude a source from defense procurements involving national security systems upon a written determination that the exclusion is necessary to protect national security and that less intrusive measures are not reasonably available. FASCSA extends similar authority across the broader federal acquisition supply chain. The designation is implemented through DFARS clauses 252.239-7017 and 252.239-7018, which require defense contractors to mitigate supply chain risk in covered procurements — though it does not prohibit commercial or non-defense use of Anthropic products. Under FASCSA, the FAR 52.204-30 clause separately obligates contractors to monitor SAM.gov, conduct reasonable inquiries, and report use of covered articles, creating immediate compliance obligations for a wide range of government prime contractors and subcontractors.
On March 9, Anthropic filed lawsuits in two federal courts challenging the designation on multiple statutory and constitutional grounds. In a legal update authored by J. Ryan Frazee, Adam S. Hickey, and John Prairie, law firm Mayer Brown has published detailed compliance guidance for government contractors navigating the fallout, covering contract review, notification duties, and potential re-sourcing of AI capabilities. The designation sets a direct precedent: a private company's <a href="/news/2026-03-14-anthropic-refuses-dow-demand-to-remove-ai-safeguards-declared-supply-chain-risk">usage policies</a> — specifically its refusal to enable mass surveillance and lethal autonomous weapons — can be legally framed as a national security supply chain risk, exposing any AI provider whose terms conflict with military or intelligence requirements to similar exclusion from federal procurement. Contractors currently using Anthropic products face a compliance clock tied to the six-month phase-out deadline set by the February 27 presidential directive.