Iran's Islamic Revolutionary Guard Corps struck Amazon Web Services datacenters in the United Arab Emirates and Bahrain in late February 2026, the first deliberate attack on commercial cloud infrastructure in active warfare. Iranian Shahed-136 suicide drones hit at least two AWS facilities in the UAE and one near Bahrain, causing fires, power shutdowns, and extensive physical damage. Iranian state TV framed the strikes as retaliation for the datacenters' role in "supporting the enemy's military and intelligence activities" — a statement that puts Gulf states' technology partnerships with US companies squarely inside Iran's target set. The attacks knocked out payments, food delivery, and banking apps for roughly 11 million people across Dubai and Abu Dhabi, most of them foreign nationals.
Iran's own justification connects the drone strikes to a second, less visible dimension of the conflict. The datacenters Iran says enabled military operations against it are part of the same infrastructure that US forces have been using to run AI systems in the field. Anthropic's Claude has reportedly been deployed in the US-Israel military campaign against Iran, performing target identification, generating weapons recommendations, and producing legal justifications for strikes — a process The Guardian described as "bombing quicker than the speed of thought." <a href="/news/2026-03-14-anthropic-refuses-dow-demand-to-remove-ai-safeguards-declared-supply-chain-risk">Anthropic is in active dispute with the Pentagon over AI safeguards</a> even as its model is allegedly used in operations that have contributed to an estimated 1,000 or more Iranian civilian deaths. AWS, Anthropic, and the Pentagon did not respond to requests for comment before publication.
The Guardian's editorial board described Anthropic as "acting as one of the few public backstops against fully automated killing in Iran," adding that it was "a bizarre position for a private company that is not even accountable to shareholders on public markets."
Congress has not passed legislation governing AI in warfare, and multilateral constraints on autonomous weapons remain largely absent. Guardian journalist Nick Robins-Early, who broke the Pentagon-Anthropic dispute, reported that neither party believes a private company should hold decision-making authority over AI's military use — yet that is the role Anthropic now occupies by default. A company built around AI safety principles finds its flagship model embedded in live lethal operations it cannot fully control or withdraw from, with no sovereign authority to enforce its own policies against a state actor.
The drone strikes carry specific consequences for the Gulf-region AI buildout. Datacenters are now confirmed military targets, not a theoretical risk, and the exposure is physical: fire suppression failures, power cuts, hardware destruction. For hyperscalers and sovereign wealth funds with announced Gulf infrastructure investments, that changes site selection, insurance underwriting, and the terms of government partnerships in ways that will not be easy to paper over. The Anthropic situation poses a separate problem with the same root cause: in the absence of binding legal frameworks, accountability for AI-assisted lethal outcomes is being contested in back-channel disputes between a startup and the United States military, with no clear authority to decide the outcome.