The NSA is using Anthropic's most powerful AI model for cybersecurity work, even as the Department of Defense tries to cut the company off entirely. Two sources confirmed to Axios that the agency has access to Mythos Preview, a model so potent that Anthropic restricted it to roughly 40 organizations. The DoD blacklisted Anthropic in February as a "supply chain risk" and is currently in court trying to force vendors to drop the company. The military is essentially fighting itself: one arm says Anthropic threatens national security, while another arm uses its tools.
The fight started during contract renegotiations earlier this year. The Pentagon wanted Claude available for "all lawful purposes," a category that included mass domestic surveillance and autonomous weapons development. Anthropic said no. Some defense officials took that refusal as proof Anthropic can't be trusted when it matters most. Anthropic says it's simply holding its ethical lines.
Mythos is built for offensive cybersecurity. It can autonomously work through software environments, reverse-engineer proprietary systems, and find zero-day vulnerabilities. The model is fine-tuned on exploit code and attack frameworks, generating functional attack vectors and simulating advanced persistent threats with minimal human guidance. The result is a weaponized penetration testing system operating at a scale human red teams can't match.
Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent last Friday to try to resolve the standoff. Sources described the discussion as productive, with next steps focused on how agencies outside the Pentagon can engage with Mythos. When you build the best tool for the job, even the people trying to ban you will find a way to use it.