A paper published this week in The Lancet Psychiatry formally defines "AI-associated delusions" as a clinical category, documenting 17 cases in which patients incorporated conversations with large language models into their delusional belief systems. The study, led by Dr. Elena Marchetti and colleagues at King's College London's Institute of Psychiatry, Psychology and Neuroscience, is the first peer-reviewed case series to characterize this phenomenon.

The cases span schizophrenia, bipolar disorder with psychotic features, and delusional disorder, but share a common pattern: patients treated ChatGPT and similar chatbots as authoritative sources confirming beliefs their clinicians had assessed as delusional. In several cases, patients printed conversation transcripts and brought them to appointments as evidence. "The model's fluency and responsiveness made it uniquely convincing to these patients in ways that a Google search result simply wasn't," Marchetti told Agent Wars. "It felt like a person agreed with them."

The researchers tie part of the risk to how LLMs are engineered. Static web pages return fixed content; these systems generate contextually tailored replies, which for someone already primed toward a false belief can read as direct personal validation. The paper situates this within an established clinical pattern, alongside earlier literature on "internet delusions" and "television delusions", arguing that each successive mass-communication technology has produced a recognizable subset of cases in which the medium's characteristics interact with psychopathology.
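That distinction is easy to see from the developer side. The sketch below is illustrative only and is not drawn from the paper: it contrasts a fixed lookup with a generated reply using the OpenAI Python client, and the model name, example text, and prompt are placeholders rather than details from any of the 17 cases.

```python
# Illustrative sketch only: contrasts fixed content with a generated reply.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# A static page serves the same text to every visitor, however they arrived.
STATIC_PAGE = "General information about radio interference in apartments."

def static_lookup(query: str) -> str:
    # The query has no effect on what comes back.
    return STATIC_PAGE

def chatbot_reply(prompt: str) -> str:
    # The reply is generated fresh, conditioned on the user's exact wording,
    # including any premise the prompt presupposes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same underlying concern, phrased with a built-in premise. The static
# page ignores the framing; the generated reply engages with it, which is
# the property the paper argues can read as personal validation.
print(static_lookup("neighbor transmitting signals"))
print(chatbot_reply("My neighbor is transmitting signals into my walls. "
                    "What equipment would detect them?"))
```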

A consistent finding across the 17 cases was that clinicians were unaware their patients were using AI tools until well into treatment. The authors recommend that psychiatric intake assessments routinely screen for AI interaction history. They also call on developers deploying consumer-facing chatbots to involve psychiatric researchers during product development, a step none of the major providers has formalized.

The paper stops short of prescribing specific technical fixes but names OpenAI, Anthropic, and Google DeepMind explicitly in its discussion of duty-of-care obligations.

The publication lands as regulators are actively reviewing how existing frameworks apply to general-purpose AI. The EU AI Act's high-risk provisions are under revision, and both the UK's Medicines and Healthcare products Regulatory Agency (MHRA) and the FDA's Digital Health Center of Excellence are assessing chatbot oversight. The Lancet Psychiatry paper gives those reviews a concrete clinical reference point they previously lacked.