The nuclear deterrence-collapse scenario, in which sufficiently advanced AI gives one nation total visibility into an adversary's arsenal and enables a disabling first strike, has become a fixture of AI safety conferences and think-tank papers. Sam Winter-Levy and Nikita Lalwani at the Carnegie Endowment for International Peace have spent considerable time stress-testing that scenario. Their conclusion, laid out in a recent 80,000 Hours podcast conversation: the scenario is probably wrong. The risk actually worth worrying about is different, and in some ways harder to fix.
Their starting point is arithmetic that doesn't care about compute budgets. A first strike only neutralizes an adversary's retaliatory capability if it is close to 100% effective, and nuclear arsenals are architected precisely to make that impossible. Submarines running silent in deep water, road-mobile launchers cycling through tunnel networks, hardened silos that require a near-direct hit to destroy: the physical dispersal is itself the deterrent, and AI doesn't obviously overcome it. The researchers examine the four capabilities most often cited as AI-enabled threats (anti-submarine warfare, mobile missile tracking, missile defense, and attacks on command-and-control networks) and find that AI advances each without closing the decisive gap.
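To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The independence assumption and every number in it are illustrative choices, not figures from the Carnegie analysis; the point is only the shape of the curve, namely that even implausibly high per-target kill probabilities leave a meaningful chance of retaliation against a dispersed arsenal.

```python
# Back-of-the-envelope sketch of the first-strike arithmetic.
# Assumes each of N dispersed retaliatory weapons is destroyed
# independently with probability p_kill -- a textbook simplification,
# and all numbers here are illustrative, not from the Carnegie analysis.

def expected_survivors(n_weapons: int, p_kill: float) -> float:
    """Expected number of retaliatory weapons surviving the strike."""
    return n_weapons * (1.0 - p_kill)

def prob_any_survive(n_weapons: int, p_kill: float) -> float:
    """Probability that at least one weapon survives, given independence."""
    return 1.0 - p_kill ** n_weapons

N = 300  # hypothetical dispersed arsenal
for p_kill in (0.90, 0.99, 0.999):
    print(f"p_kill={p_kill:>5}: "
          f"expected survivors ~ {expected_survivors(N, p_kill):5.1f}, "
          f"P(at least one survives) = {prob_any_survive(N, p_kill):.4f}")

# Even at a 99.9% per-target kill probability, there is roughly a 26%
# chance that at least one of 300 weapons survives; at 99%, over 95%.
```

This is the gap the researchers describe: AI-enabled tracking can push the per-target kill probability upward, but deterrence only collapses in the limit where that probability approaches 1 across every delivery platform simultaneously.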
Where the analysis sharpens is on instability rather than collapse. Arms races don't wait for one side to achieve first-strike capability; they run on perception of emerging vulnerability. As AI capabilities develop unevenly, and in ways that are hard for adversaries to assess from the outside, states are likely to over-respond by building and deploying more weapons, faster, under conditions of higher uncertainty. The researchers flag fast-takeoff scenarios specifically: the development trajectories that AI labs debate internally are precisely the ones that, in a geopolitical context, create the shortest windows between strategic stability and dangerous ambiguity.
There is also a fog-of-crisis problem that gets less attention outside military circles. Technological superiority does not translate cleanly into political leverage. A nation that believes it has an advantage still has no clean mechanism for coercing an adversary without risking the very exchange it is trying to avoid. The gap between capability and usable coercive power is where crises historically spiral — and AI narrows that gap in some places while widening it in others.
The closing argument in the conversation carries particular weight for anyone working in the AI industry right now. The people building the systems now being deployed in intelligence analysis, logistics optimization, targeting support, and cyber operations have almost no professional overlap with the people who understand nuclear risk. That isolation made more sense when AI was an academic curiosity. It makes less sense as agentic systems move into operational military roles, and least sense in a fast-takeoff scenario, where the very capabilities AI labs are racing to build could destabilize a strategic environment that nobody in the lab has ever thought carefully about.