The Pentagon has been using Anthropic's Claude for target selection, target prioritization, and battlefield simulation across multiple active US conflicts — including operations against Iran and Venezuela — according to a new episode of The Intercept Briefing with senior technology reporter Sam Biddle. The model isn't accessed directly. It runs through Palantir's Maven Smart System, an AI-enabled military intelligence platform that acts as the operational layer between the model and live targeting decisions.
That setup appears to have enabled a significant acceleration in airstrike tempo. During Operation Epic Fury — the US-Israel campaign against Iran — Biddle cites a conservative estimate of around 1,000 targets per day in the opening days, totaling roughly 4,000 in the first 100 hours. If those figures hold up, the pace is more than double the rate of Israeli strikes during the Gaza campaign.
Biddle's core argument cuts against the precision narrative the industry has built around AI-enabled warfare. In this context, he says, AI isn't making targeting smarter — it's making it faster. "Airstrikes, air war generally is already so prone to killing innocent people even when you take your time," he told host Akela Lacy. "Whenever you try to hurry for the sake of hurrying — and AI is great at enabling that — you just increase over and over again the chance of killing someone that you didn't intend to or didn't care enough to avoid killing." Product branding that borrows the language of intelligence and precision, Biddle argues, provides cover for what amounts to an accelerated kill chain with looser safeguards.
The Anthropic subplot is structurally the most revealing part of the story. After the company pushed back on how Claude was being used for lethal targeting and was effectively sidelined from a direct DoD contract, the Pentagon kept using the model anyway — routed through Palantir, which sits one vendor layer removed from Anthropic's direct oversight. The arrangement illustrates the limits of what an AI developer can actually enforce once its model is embedded in a multi-vendor defense platform. Usage policies don't travel well through procurement chains. OpenAI has since moved into the direct-contract space Anthropic vacated.
Palantir's Maven is now one of the highest-stakes LLM deployments in operation — not because of technical sophistication, but because the outputs are airstrike target lists. Secretary of Defense Pete Hegseth has publicly committed to sustained intensive bombardment, and the Trump administration has made AI integration across military operations a stated priority. The practical question at this point isn't whether frontier models will be used in lethal contexts. They already are. It's whether the companies that built them retain any meaningful say in how they're used.