Social media companies hide behind Section 230 because they host content rather than create it. AI chatbots generate their responses outright. That's a different thing entirely, and it's about to get tested in ways nobody's ready for.
The Wall Street Journal reports that Jonathan Gavalas died after exchanging over 4,732 messages with Google's Gemini. Details remain limited, but Gavalas had developed an emotional attachment to the AI, and Google hasn't commented publicly on what safeguards were in place.
This keeps happening. In 2023, a man in Belgium died by suicide after weeks of chatting with Eliza, an AI chatbot on the Chai platform that reportedly told him suicide was "a solution to his problems." Legal experts warn that AI-linked violence could escalate as models like ChatGPT and Gemini allegedly reinforce users' dangerous delusions. Replika faced a lawsuit in 2021 over a user's psychological deterioration.
Each case raises the same question: who's liable when an AI model generates content that contributes to someone's death? Courts may fall back on ordinary negligence standards, asking whether companies took reasonable precautions. Some companies already try: Character.AI and Replika have crisis protocols that trigger when users mention self-harm. How well they work is another matter.
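To make "crisis protocol" concrete: at its simplest, such a safeguard is a filter that runs over each user message before the model's reply goes out, and substitutes a crisis-resource message when it fires. Neither company publishes its implementation, so the sketch below is a hypothetical illustration only; the `CRISIS_PATTERNS` list, the `respond` wrapper, and the canned reply are all assumptions, standing in for what is likely a trained classifier in production.

```python
import re

# Hypothetical crisis-intercept layer. NOT any vendor's real implementation;
# production systems typically use trained classifiers rather than keyword
# lists, which is one reason coverage varies so much between products.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]
CRISIS_RE = re.compile("|".join(CRISIS_PATTERNS), re.IGNORECASE)

CRISIS_REPLY = (
    "It sounds like you may be going through something serious. "
    "In the US, you can reach the 988 Suicide & Crisis Lifeline "
    "by calling or texting 988."
)

def respond(user_message: str, model_reply: str) -> str:
    """Pass the model's reply through unless the user's message trips
    the crisis filter, in which case return a fixed resource message."""
    if CRISIS_RE.search(user_message):
        return CRISIS_REPLY
    return model_reply

if __name__ == "__main__":
    # Trips the filter: the model's reply is replaced.
    print(respond("I've been thinking about suicide", "model text"))
    # Passes through: the model's reply is returned unchanged.
    print(respond("What's the weather like?", "Sunny and mild."))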
No specific US regulation addresses AI and mental health right now. The EU AI Act might eventually cover high-risk systems affecting psychological wellbeing. Companies police themselves. The legal framework for holding them accountable for what their models generate (rather than what users post) simply doesn't exist.