Google handed the Pentagon the keys to its AI models with almost no strings attached. The classified deal, reported by The Information, lets the US Department of Defense use Google's AI for "any lawful government purpose." The contract includes language about not using the tech for domestic mass surveillance or autonomous weapons without human oversight, but it explicitly denies Google any veto power over how the government actually uses the technology. Those restrictions are polite suggestions, not enforceable rules.
The deal also requires Google to help the government adjust AI safety settings on request. A Google spokesperson said the company is "proud to be part of a broad consortium of leading AI labs" supporting national security. That's a long way from 2018, when thousands of employees protested Project Maven, a Pentagon program using AI to analyze drone footage. Google backed down then and published principles pledging not to build AI for weapons or surveillance that violates human rights. Those principles still exist on paper.
Anthropic, by contrast, was blacklisted for refusing to remove guardrails around weapons and surveillance. Google, OpenAI, and xAI went the other direction, signing their own classified Pentagon deals. Before Google's deal was finalized, hundreds of its employees signed an open letter urging CEO Sundar Pichai to reject it, citing concerns the tech would be used in "inhumane or extremely harmful ways." The contract was signed anyway.
Building powerful AI systems and selling them to governments means giving up control over how they're used. You can write all the principles you want. Once the check clears, the government decides what "lawful" means.