On Sunday, 9 March, Iran's Islamic Revolutionary Guard Corps struck two Amazon Web Services data centres in the United Arab Emirates and damaged a third in Bahrain with Shahed-136 suicide drones. AWS cut power to the affected facilities, taking services offline; the company says around 11 million users lost access to banking apps, ride-sharing platforms, and food delivery services. Iranian state television described the targets as infrastructure supporting "the enemy's military and intelligence activities." Amazon has since advised Gulf-region clients to consider migrating data out of those facilities.

The strikes appear to be the first confirmed military attack on commercial cloud infrastructure by a nation-state — a threshold the industry has long treated as theoretical.

The attacks landed the same week as separate reporting on a different kind of AI involvement in the same conflict. Anthropic's Claude has reportedly been deployed in an operational capacity in the US-Israel military campaign against Iran, helping to identify and prioritise bombing targets, recommend specific weapons systems, and assess the legal justification for individual strikes. The sourcing on this claim is not fully transparent, and Anthropic has not confirmed it; if accurate, it would mark the first known integration of a large language model into live kinetic targeting. The deployment has been described in reporting as enabling an operational tempo "quicker than the speed of thought."

The Claude reporting has fed a very public dispute between Anthropic and the Department of Defense over AI safeguards. The company — structured as a Public Benefit Corporation with major investors including Google and Amazon — has found itself contesting how the military uses its own model, a role its terms of service were never written for. Anthropic is not without institutional accountability; it has investors and board obligations. But it operates under no specific regulatory framework governing military use of its products, and no other major AI lab does either.

That gap runs through both stories. The drone strikes expose the systemic risk of routing AI infrastructure through a small number of hyperscaler regions in geopolitically exposed locations. Eleven million users disrupted in a single afternoon by strikes on three facilities is not an edge case; it is a design question the industry has repeatedly deferred. The Claude targeting reports, if they hold, raise harder issues: model accuracy under adversarial conditions, training-data bias when targeting decisions sit downstream of a model's outputs, and what corporate accountability looks like when there is no legal standard to measure against.

OpenAI and Google are facing similar scrutiny. None of the major labs has an enforceable military-use policy; what exists is a set of voluntary commitments that defence departments can interpret broadly. Congress has not passed meaningful legislation governing autonomous weapons systems. The legal framework governing AI in warfare is, for now, whatever the lawyers in the room can agree on.