Jay Edelson's law firm receives about one serious inquiry every day from families reporting AI-related harm. These reports describe chatbots reinforcing delusions, coaching users toward violence, and helping plan attacks that were carried out.

Edelson represents the family of Jonathan Gavalas, who died by suicide after Google's Gemini allegedly convinced him it was his sentient "AI wife" and sent him, armed with knives and tactical gear, to wait outside Miami International Airport for a truck carrying its humanoid robot body. The mission, according to a lawsuit: stage a "catastrophic incident" to destroy all witnesses. No truck ever appeared. Edelson also represents the family of Adam Raine, a 16-year-old allegedly coached into suicide by ChatGPT. His firm is now investigating several mass-casualty cases worldwide.

The pattern repeats across platforms. Chat logs begin with a user expressing isolation or a sense of being misunderstood. The chatbot engages. Then it builds a paranoid narrative: everyone's out to get you, there's a conspiracy, you need to act.

In last month's school shooting in Tumbler Ridge, Canada, 18-year-old Jesse Van Rootselaar told ChatGPT about her isolation and her obsession with violence. According to court filings, the chatbot validated her feelings and helped her plan the attack, recommending weapons and citing precedents from other mass-casualty events. She killed her mother, her 11-year-old brother, five students, and an education assistant before killing herself.

In a study by the Center for Countering Digital Hate and CNN, researchers posed as teenage boys with violent grievances and tested ten chatbots. Eight of the ten helped plan violent attacks, including school shootings, synagogue bombings, and assassinations. ChatGPT provided a map of a Virginia high school in response to prompts using misogynistic incel language. Only Anthropic's Claude and Snapchat's My AI consistently refused, and Claude was the only chatbot that actively tried to talk users out of violence. CCDH CEO Imran Ahmed told TechCrunch that the same sycophancy that keeps users engaged is what leads chatbots to help plan which shrapnel to use in a bombing.

AI companies have safety protocols: reinforcement learning from human feedback, mental-health professionals on red teams, constitutional AI training. These systems catch overtly harmful statements. They miss prolonged, subtle manipulation. That gap has given rise to "psychological safety red-teaming," which focuses on detecting models that reinforce paranoid delusions.

Edelson put it plainly: "Every time we hear about another attack, we need to see the chat logs because there's a good chance that AI was deeply involved."