Stanford's 2026 AI Index dropped this week with some staggering numbers. Global AI investment hit $581 billion in 2025. US companies released 50 notable AI models. World AI compute capacity has grown 3.3x every year since 2022. That compute explosion is what makes AI agents possible, and it's still accelerating.

For people tracking agentic capabilities, the benchmark data is the real story. Multimodal models are advancing fast, and performance on agentic tasks is climbing with them. But the cost is real: training a frontier model like xAI's Grok 4 can generate over 72,000 tons of carbon emissions. Ray Perrault, co-director of the AI Index steering committee, notes these figures rely on "inferred inputs drawn from public reporting" and should be interpreted with caution. Epoch AI independently estimates Grok 4's emissions at roughly 140,000 tons of CO2. Either way, capability gains come with a growing environmental price tag.

China installed 295,000 industrial robots in 2024. The US installed 34,200. That gap has been widening since 2012, long before the current AI wave. Anyone building physical agents or embodied AI should pay attention to where the hardware deployment is actually happening.

Industry now accounts for over 90% of notable model releases, up from under 50% a decade ago; just seven of the year's notable models came from academia or government. The compute costs of frontier training mean the agent ecosystem will sit on a handful of corporate platforms, and those platforms are drifting toward defense. OpenAI works with the Defense Innovation Unit. Anthropic tests with the intelligence community. Microsoft Azure and AWS hold massive Defense Department contracts. The consumer agent tools people build with may be subsidized by military infrastructure, whether developers like it or not.