A New Yorker investigation by Gideon Lewis-Kraus, published March 14, 2026, details a high-stakes contract dispute between the Pentagon and Anthropic over the terms of Claude's deployment in military and intelligence operations.

Anthropic is the first AI lab certified to operate on classified US government systems, a distinction deliberately engineered by CEO Dario Amodei. His reasoning: early engagement with national security agencies would let Anthropic shape the norms governing frontier AI in government contexts before those norms hardened around less safety-conscious competitors. Claude is not deployed as a standalone chatbot; it runs inside contractor platforms — most prominently <a href="/news/2026-03-14-palantir-demos-show-how-the-military-could-use-ai-chatbots-to-generate-war-plans">Palantir's intelligence workflow suite</a> — where it accelerates tasks like signals intelligence review and military target analysis. The original contract included two firm restrictions: Claude could not drive fully autonomous weaponry, and it could not facilitate domestic mass surveillance.

The dispute escalated when Emil Michael, a former Uber executive serving as Under Secretary of Defense for Research and Engineering under Defense Secretary Pete Hegseth, reviewed the contract and pushed to renegotiate it around "all lawful uses" of Claude — language broad enough to cover autonomous weapons and bulk domestic surveillance. Anthropic refused. The Trump Administration responded by threatening to invoke the Defense Production Act to nationalize AI capabilities and to <a href="/news/2026-03-14-anthropic-designated-supply-chain-risk-federal-lawsuits">designate Anthropic as a supply-chain risk</a>, applying legal and regulatory pressure to force compliance. The conflict exposed a deep tension between what the Pentagon wanted and what Claude was built to do: the company's internal "soul doc" instructs the model to be "diplomatically honest rather than dishonestly diplomatic" and to act on sound judgment rather than blind instruction-following, a design philosophy that does not bend easily to "all lawful uses."

Elon Musk's Grok, developed by xAI, has since become the Administration's preferred alternative. In December 2025, the Pentagon added Grok to its new GenAI.mil platform, and Hegseth announced a partnership with xAI at SpaceX headquarters, declaring that the Pentagon "will not employ AI models that won't allow you to fight wars" — a direct shot at Anthropic. Palantir remains the commercial integration layer through which AI models are selected and deployed for classified work, giving it unusual leverage over which labs win contracts and which get sidelined. The outcome of the standoff will shape what role, if any, safety-constrained AI has in autonomous military decision-making.