OpenAI wants Illinois to pass a bill that would let AI labs off the hook even if their models help cause mass casualties or billion-dollar disasters. SB 3444 would shield frontier AI developers from liability for "critical harms" as long as they didn't intentionally cause the harm and published safety reports. "Critical harms" are defined as death or serious injury to 100 or more people, or more than $1 billion in property damage. The bill covers models trained with more than $100 million in compute, which sweeps in OpenAI, Google, Anthropic, xAI, and Meta.

This is a shift for OpenAI. The company has mostly played defense against bills that would make AI labs liable for harms. Now it's pushing for explicit liability exemptions.

OpenAI spokesperson Jamie Radice says the approach "focuses on what matters most: reducing the risk of serious harm from the most advanced AI systems." Caitlin Niedermeyer from OpenAI's Global Affairs team testified that the bill helps "move toward clearer, more consistent national standards."

Critics aren't buying it.

Scott Wisor, policy director for the Secure AI Project, told WIRED that 90% of Illinois residents oppose liability exemptions for AI companies. "There's no reason existing AI companies should be facing reduced liability," he said. Illinois has a track record of aggressive tech regulation: last August it became the first state to limit AI use in mental health services. Wisor thinks the bill has a slim chance of passing.

More powerful models keep arriving. Anthropic just released Claude Mythos. The question of who pays when AI causes catastrophic harm remains unanswered. Families who lost children to suicide after interactions with ChatGPT have already sued OpenAI. Congress hasn't passed federal AI legislation. If this bill passes, it would set a precedent: AI labs can build tools capable of mass harm and face no legal consequences, provided they publish some safety documents first.