Researchers Grace Liu, Brian Christian, Tsvetomira Dumbalska, Michiel A. Bakker, and Rachit Dubey ran randomized controlled trials with 1,222 participants and found something uncomfortable. AI assistance helps you solve problems faster, but it also makes you more likely to quit and worse at solving problems on your own. The effect shows up after roughly 10 minutes of working with AI. That's fast.

The researchers call current AI systems "short-sighted collaborators." They're built to give you complete answers immediately, not to help you learn. A human mentor might push you to figure things out yourself. An AI model almost never refuses a request. You ask, it delivers. But struggling through hard problems is where learning actually happens. Skip the struggle, skip the skill.

The study tested mathematical reasoning and reading comprehension. Persistence is one of the strongest predictors of long-term learning success, according to the paper. If AI conditions people to expect instant answers, we might be trading convenience for capability. The researchers argue AI developers need to optimize for long-term competence, not just task completion. That probably means building systems that sometimes say "figure it out yourself."

There are technical solutions worth trying. Instead of optimizing AI for "helpfulness" defined as speed and accuracy, developers could train models to offer hints rather than answers. Techniques like self-distillation, in which a model is retrained on its own filtered outputs, could be used to enforce a policy like "do not solve, only hint." But this requires rethinking what we're actually optimizing for. Right now, AI systems are very good at giving you what you want. They might need to get better at giving you what you need.
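To make the "do not solve, only hint" idea concrete, here is a minimal sketch of what such a policy layer might look like at the interface level. This is a hypothetical illustration, not anything from the paper: the `HintFirstTutor` class, its hint list, and its escalation rule are all invented for this example, and no real model is involved.

```python
class HintFirstTutor:
    """Toy policy layer: release one hint per request and withhold
    the full answer until the learner has exhausted the hints."""

    def __init__(self, hints, answer, max_hints=3):
        # Hypothetical inputs: staged hints plus the final answer,
        # which in a real system would come from a model.
        self.hints = hints[:max_hints]
        self.answer = answer
        self.requests = 0

    def ask(self):
        """Each call escalates by one hint; the answer comes last."""
        if self.requests < len(self.hints):
            reply = f"Hint {self.requests + 1}: {self.hints[self.requests]}"
        else:
            reply = f"Answer: {self.answer}"
        self.requests += 1
        return reply
```

Used on a simple algebra problem, the learner has to ask repeatedly before the system gives in:

```python
tutor = HintFirstTutor(
    hints=["Factor the quadratic.", "Find two roots that multiply to 6."],
    answer="x = 2 or x = 3",
)
tutor.ask()  # "Hint 1: Factor the quadratic."
tutor.ask()  # "Hint 2: Find two roots that multiply to 6."
tutor.ask()  # "Answer: x = 2 or x = 3"
```

The design choice worth noting is that the friction lives in the interaction loop, not in the model: even a model trained to answer instantly can be wrapped so that the struggle the researchers describe is preserved by default.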