Roman Hoffmann is a software developer, not a researcher, which makes his dissection of AI coding psychology more credible, not less. He's writing from inside the loop.
His piece on codn.dev opens with a simple observation: when you prompt Claude, Copilot, or Cursor and get working code back in ten seconds, your brain doesn't file that under 'productivity tool.' It files it under 'this worked, do it again.' That compression — from a write-debug-fix cycle that used to take hours down to a single exchange — is where Hoffmann locates the problem.
The reinforcement mechanism he describes is a variable-ratio reward schedule, the same structure that makes slot machines effective. The key word is variable. If every prompt produced perfect code, the loop would be satisfying but not compulsive. It's the inconsistency — sometimes brilliant, sometimes broken, always worth one more try — that keeps sessions running longer than intended.
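The slot-machine comparison can be made concrete with a toy simulation. The sketch below is my construction, not Hoffmann's: the hit probability, patience threshold, and session cap are all illustrative assumptions. What it shows is the structural point — under intermittent rewards, a "quit after N straight failures" rule almost never fires early, because one success resets the count.

```python
import random

def simulate_session(hit_prob=0.35, max_prompts=50, patience=5, seed=0):
    """Toy model of a vibe-coding session under a variable-ratio schedule.

    Each prompt 'hits' (produces working code) with fixed probability,
    so the gap between rewards varies unpredictably. The developer only
    stops after `patience` consecutive misses -- and a single hit resets
    that counter, which is why sessions run long. All parameters are
    illustrative assumptions, not figures from Hoffmann's piece.
    """
    rng = random.Random(seed)
    prompts, misses_in_a_row = 0, 0
    while prompts < max_prompts:
        prompts += 1
        if rng.random() < hit_prob:
            misses_in_a_row = 0        # a hit: "always worth one more try"
        else:
            misses_in_a_row += 1
            if misses_in_a_row >= patience:
                break                  # frustration finally wins
    return prompts

# Intermittent reward: sessions tend to run long before patience runs out.
lengths = [simulate_session(seed=s) for s in range(1000)]
print(sum(lengths) / len(lengths))   # average prompts per session

# The two degenerate cases bracket the behavior: with no hits the session
# ends after exactly `patience` prompts; with guaranteed hits it only ends
# at the `max_prompts` cap.
print(simulate_session(hit_prob=0.0), simulate_session(hit_prob=1.0))
```

The interesting regime is the middle one: rewards frequent enough to keep resetting the quit condition, rare enough to stay unpredictable — exactly the inconsistency the paragraph above describes.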
Hoffmann identifies several compounding effects. The 'empowerment spike' hits hardest for newcomers: shipping a working prototype without years of training is, by his account, a reward potent enough to anchor the pattern permanently. More interesting is what happens after the session ends. Unfinished problems stay cognitively active — the Zeigarnik effect, well documented in psychology, predicts that people ruminate more on incomplete tasks than on resolved ones. In vibe coding, Hoffmann argues, this produces what he calls the 'mental prompt carousel': developers involuntarily rehearsing hypothetical prompts in the shower, at dinner, at 3 a.m. Some report waking early specifically to get back to the editor.
Anyone tracking how AI coding tools are embedding themselves in professional workflows should sit with this framing. Cursor, GitHub Copilot, and Claude Code have mostly been covered as productivity stories — time saved, lines generated, developers unblocked. Hoffmann's angle is different. These are behavioral systems. Their engagement dynamics have more in common with habit-formation research than with traditional tool adoption. The stickiness isn't just a feature; it may be an artifact of the same psychological conditions that make the tools hard to put down.
The downstream risk he's most concerned about is 'fragile confidence' — the tendency to trust output that looks plausible without verifying it. This isn't a character flaw; it's a predictable consequence of a workflow that rewards speed and punishes friction. When the reinforcement loop runs fast enough, the instinct to slow down and audit gets crowded out, with real implications for code quality and security.
His prescriptions won't win friends in an industry obsessed with reducing friction: timebox your sessions, ask models to explain their reasoning rather than just produce output, and treat AI-generated code the way you'd treat a pull request from someone you haven't worked with before. The piece includes a candid admission that it was itself written through the iterative prompting workflow it critiques. That's not a contradiction. It's a reasonably honest illustration of the point.