Someone threw a Molotov cocktail at Sam Altman's house. Daniel Moreno-Gama, a 20-year-old from Texas, has been charged with the attack. He then went to OpenAI headquarters and smashed the glass doors with a chair, carrying an anti-AI document. Police say he'd called for 'Luigi'ing some tech CEOs,' a reference to the killing of UnitedHealthcare's CEO. The incident remains under investigation.
For years, Altman and other AI executives have been telling anyone who'll listen that their technology could end civilization. Altman, who has faced past allegations of deception, said back in 2015 that AI would 'probably, most likely, sort of lead to the end of the world.' His caveat? 'In the meantime, there will be great companies created.' He's warned about AI designing biological pathogens and signed a letter about extinction risk. Dario Amodei at Anthropic has made similar claims, saying humanity is about to get 'almost unimaginable power' and might not be mature enough to handle it.
Now Chris Lehane, OpenAI's global policy chief, wants everyone to calm down. 'Some of the conversation out there is not necessarily responsible,' he told the San Francisco Standard. 'When you put some of those thoughts and ideas out there, they do have consequences.' He's right about consequences. He's just pointing the finger the wrong way. When you're building the technology you claim could destroy humanity, and you keep building it anyway, you don't get to act shocked when someone takes you at your word.