A WIRED investigation published on March 13, 2026, by reporter Caroline Haskins provides the most detailed public account to date of how Anthropic's Claude AI model is being used inside US military intelligence platforms built by defense contractor Palantir. Drawing on software demos, public documentation, and Pentagon records, the report details how AI chatbots embedded in Palantir's Maven Smart System and Army Intelligence Data Platform help military analysts interpret satellite imagery, nominate targets for bombardment, generate courses of action, and produce intelligence assessments. Cameron Stanley, the Pentagon's chief digital and artificial intelligence officer, confirmed at a recent Palantir conference that Maven is deployed "across the entire department," spanning the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command. Claude reportedly contributed to the US operation that resulted in the capture of Venezuelan President Nicolás Maduro in January 2026, and it continues to be used in defense operations in the ongoing war in Iran.
The reporting surfaces amid an escalating legal confrontation between Anthropic and the Trump administration. In late February 2026, Anthropic refused to grant the Pentagon unconditional access to Claude, specifically objecting to its use in mass surveillance of Americans or fully autonomous weapons systems. The Department of Defense responded by designating Anthropic's products a "supply-chain risk," prompting Anthropic to file two lawsuits this week alleging illegal retaliation. Despite the dispute, Claude appears to remain operational within Palantir's systems through the November 2024 partnership that made the model available inside Palantir's Artificial Intelligence Platform (AIP). Both Palantir and Anthropic declined to comment on WIRED's reporting, and the Defense Department did not respond to a request for comment.
The investigation exposes a structural enforcement gap with real consequences for the AI agent ecosystem. Palantir's AIP is designed as a middleware layer that runs inside existing platforms like Foundry or Gotham, allowing customers to supply their own classified training data and choose among competing LLMs, including Claude, GPT-4.1, and Meta's Llama. Once Claude is embedded within this architecture, Anthropic has no runtime telemetry, no audit-log access, and no direct visibility into how the model is queried or what outputs it generates. Maven's "AI Asset Tasking Recommender," a tool that proposes specific bombers and munitions for specific targets, illustrates the ambiguity at the heart of Anthropic's usage restrictions: because a human analyst technically reviews recommendations before acting, the workflow may satisfy contractual human-in-the-loop requirements even while enabling operationally autonomous targeting at machine speed.
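To make that gap concrete, here is a deliberately simplified sketch of the pattern the reporting describes. Every name in it is hypothetical, not Palantir's or Anthropic's actual API: the middleware keeps the audit trail behind the classification barrier, the model provider sees nothing but prompt strings, and the contractual human review can compress to a single keystroke per machine-generated nomination.

```python
# Hypothetical sketch only -- invented names, not Palantir's or Anthropic's
# real APIs. It illustrates two points from the reporting: (1) the model
# provider sees only prompt strings at the API boundary, while audit logs
# and classification context stay inside the contractor's enclave; and
# (2) a "human-in-the-loop" review can satisfy its contract while gating
# nothing about the rate of machine-generated nominations.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Recommendation:
    target_id: str
    platform: str   # e.g., a proposed bomber
    munition: str   # e.g., a proposed weapon


@dataclass
class MiddlewareLayer:
    # Customer-selected model backends, each mapping a prompt to an output.
    backends: dict[str, Callable[[str], str]]
    audit_log: list = field(default_factory=list)  # no read path for the provider

    def query(self, model: str, prompt: str, classification: str) -> str:
        # Only `prompt` crosses the provider boundary; everything else stays local.
        output = self.backends[model](prompt)
        self.audit_log.append((model, classification, prompt, output))
        return output


def human_review(rec: Recommendation) -> bool:
    # Contractually, "a human analyst reviews recommendations before acting";
    # operationally, one approval keystroke per target.
    answer = input(f"Approve {rec.target_id} via {rec.platform}/{rec.munition}? [y/N] ")
    return answer.strip().lower() == "y"
```

Nothing in this loop violates a human-in-the-loop clause on its face, which is exactly the ambiguity the reporting highlights: the review step exists, but it constrains neither the volume nor the tempo of what the model proposes.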
The problem isn't unique to Anthropic. Commercial AI developers license models to contractors who sell to government agencies, creating a multi-tier structure that makes usage policies difficult to enforce, particularly when classification barriers prevent the original developer from auditing downstream behavior. The Palantir-Anthropic situation tests whether safety commitments made at the model layer can survive contact with the national-security supply chain. This reporting suggests the answer is largely no, regardless of the intentions of the model provider.