For fourteen years, Mike LeBlanc served in the Marine Corps. Now he makes robots that fight. His company, Foundation, has built the Phantom MK-1 — a weapon-capable humanoid in matte black that the company describes as the first robot of its kind designed explicitly for defense applications. The distinction matters to LeBlanc: he wants a machine that can go where soldiers go, absorb the hits soldiers absorb, and operate where soldiers can't.

Foundation holds $24 million in combined contracts with the U.S. Army, Navy, and Air Force and carries an SBIR Phase III designation — a status that lets the government award it follow-on contracts without additional competitive bidding. Two Phantom units have been deployed to Ukraine for frontline reconnaissance, and what LeBlanc witnessed there reshaped his understanding of the battlefield. "Really shocking," he told TIME. In Ukraine, he said, robots have become the primary combatants; humans are increasingly in support roles. That inversion — the soldier as technician rather than fighter — was not something he encountered in Afghanistan.

Where Foundation is competing for the hardware layer, Scout AI is building for the decision layer. At a recent Pentagon demonstration, the company's Fury AI Orchestrator ran seven autonomous agents through a complete kill chain — from identifying a target to authorizing engagement — with no human in the loop at any stage. Scout AI is now in negotiations for $225 million in Department of Defense contracts. The demonstration matters less for its scale than for what it implies: not a single robot acting independently, but a networked system of agents making lethal decisions collectively, at speeds no human operator can match.

The field Scout AI and Foundation are competing in has expanded fast. Anduril and a cluster of other defense-tech firms are drawing serious Pentagon investment, and the capital now flowing into autonomous military systems carries a different set of expectations from those of the autonomous-vehicle and industrial-robotics sectors these companies partly grew out of. The customers want weapons.

The legal and ethical questions are not new, but they are arriving faster than the frameworks built to handle them can adapt. Jennifer Kavanagh of Defense Priorities points to a norm already eroding in Ukraine: AI-powered drones, when Russian jamming severs their link to remote operators, are autonomously selecting and engaging targets. Human control isn't being debated away in policy papers — it's being removed in the field, by circumstance, without any formal revision to the rules. The International Committee for Robot Arms Control has warned that fully autonomous weapons diffuse accountability for war crimes in ways existing international law doesn't address. U.N. Secretary-General António Guterres has called lethal autonomous weapons systems "morally repugnant."

The political context sharpened on February 28, when President Trump signed an order directing federal agencies to cease procurement from Anthropic. Contract negotiations had broken down after Anthropic insisted on two clauses: one barring its technology from being used to surveil American citizens, the other prohibiting its use in autonomous weapons programmed to kill without human oversight. The White House rejected both. Those restrictions, as written, align with existing Pentagon guidelines on autonomous systems — which makes the order difficult to read as purely procedural. For commercial AI companies watching, it poses a concrete question about how much constraint is acceptable before it costs them federal business.

Neither Foundation nor Scout AI uses Anthropic's models, so the order has no direct bearing on their contracts. But the signal it sends to the broader autonomous-weapons sector is plain enough.