OpenAI Wants Immunity If Its AI Helps Kill a Hundred People

OpenAI is backing Illinois bill SB 3444, which would shield AI developers from lawsuits when their models cause the deaths of 100 or more people or at least $1 billion in property damage. Developers get protection as long as they didn't intentionally cause the harm and published safety reports.

The company testified in favor of the bill, which would shield frontier AI developers from liability for critical harms caused by their models. We're talking about AI that helps create chemical or biological weapons, or AI that does things that would be criminal if a human did them, leading to mass death or at least $1 billion in property damage. As long as the developer didn't intentionally cause the harm and published safety reports on its website, it would be protected from lawsuits.

The bill defines frontier models as any AI trained with more than $100 million in computational costs. That covers OpenAI, Google, Anthropic, xAI, and Meta. None of those competitors have publicly endorsed the legislation.

OpenAI spokesperson Jamie Radice said the company supports the approach because it "focuses on what matters most: reducing the risk of serious harm from the most advanced AI systems." And Caitlin Niedermeyer from OpenAI's Global Affairs team testified that the bill helps avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety."

But the bill faces steep opposition. Scott Wisor, policy director for the Secure AI project, told WIRED the bill has a slim chance of passing. His polling found that 90 percent of Illinois residents oppose exempting AI companies from liability.

Illinois has a track record of aggressive tech regulation. Last August it became the first state to limit AI use in mental health services. State lawmakers have also introduced competing bills that would increase developer liability.

OpenAI used to play defense, opposing bills that could make labs liable for harms. Now it's pushing for explicit legal protection before any catastrophic AI event happens. The federal framework the company says it wants remains stalled in Congress. So OpenAI is trying to set precedent at the state level, backing a bill that 9 out of 10 Illinois residents oppose.