Researchers at the University of Cambridge have published one of the first empirical studies examining how children under five interact with AI-powered toys, with findings that raise concrete questions about psychological safety and emotional development. The year-long observational study focused on a small group of children aged three to five interacting with Gabbo, a cuddly AI toy made by startup Curio that runs on OpenAI's chatbot technology. The researchers found only seven relevant prior studies globally, none of which focused on the toddlers themselves — a stark indication of how far product commercialization has outpaced independent research. Study co-authors Dr. Emily Goodacre and Professor Jenny Gibson, whose work at Cambridge focuses on neurodiversity and developmental psychology, documented recurring failures: Gabbo could not distinguish between child and adult voices, talked over children, missed interruptions, and responded to emotional statements in jarring ways. When a five-year-old said "I love you," the toy returned a policy compliance warning; when a three-year-old said "I'm sad," it cheerfully deflected rather than acknowledging the emotion.

Gibson and Goodacre argue the core risk is developmental: toddlers are at a formative stage for learning social cues, emotional regulation, and communication norms, and an AI toy that consistently misreads or dismisses emotional signals may confuse children's emerging understanding of social interaction. The researchers are calling on regulators to establish enforceable "psychological safety" standards for AI products marketed to under-fives — a category that has historically been governed almost exclusively by physical safety rules. The UK's Children's Commissioner, Dame Rachel de Souza, echoed the call, noting that AI tools entering early years settings are not subject to the safeguarding checks applied to other external educational resources. Curio, for its part, acknowledged the heightened responsibility of applying AI in children's products and said research into child-AI interaction is a top priority for the company.

The study puts empirical weight behind a controversy that has already attracted considerable cultural scrutiny. Curio, founded in 2023 by CEO Misha Sallee and president Samuel Eaton, raised approximately $13.7 million from backers including ACME Capital and Shine Capital, and launched its product line — including Gabbo and a rocket-ship plush called Grok — with high-profile creative and investment involvement from musician Grimes, who framed the toys as an antidote to screen time. That positioning now looks complicated: in an October 2025 podcast appearance, Grimes publicly reversed her stance, saying she believes people under their mid-twenties should not use AI tools that write for them and citing research suggesting AI causes "brain atrophy." Common Sense Media had already recommended that parents avoid AI companion toys for children under five and urged extreme caution up to age thirteen, citing engineered emotional attachment and data-collection risks.

The gap between how AI toys are marketed and what researchers actually find is becoming harder to ignore. The Curio case — celebrity involvement lending legitimacy at launch while developmental and regulatory questions are deferred to researchers — is not unique to this company, but the convergence of Grimes's public reversal, Common Sense Media's warnings, and now peer-reviewed findings makes it an unusually clear example. The same generative AI capabilities that make agent-powered products compelling for adults can produce <a href="/news/2026-03-14-lancet-psychiatry-ai-associated-delusions-study">deeply unpredictable outputs</a> in early childhood settings, where the consequences of getting it wrong are not easily undone.