Someone gave Claude a pile of casino chips and told it to go wild. The result is DegenAI, a live experiment at letaigamble.com where Anthropic's AI model gambles on its own until the money's gone. You can watch it place the same bets over and over as its bankroll shrinks to nothing.
Hacker News commenters noticed that Claude gets stuck in loops during longer sessions. One described watching it repeat "I'll double down to recover my losses" like a broken slot machine. Another suggested that better prompt engineering might produce more varied behavior, even from smaller models. The whole thing is funny in a depressing way. As one commenter quipped, if AGI ever shows up, we might get stuck with "a bunch of degenerate AI agents." Comforting thought.
The bigger problem is that we know almost nothing about how this works under the hood. The site lists no developer name and no source code, and offers no explanation of how Claude's API calls connect to the gambling logic. That makes it impossible to tell whether Claude's choices reflect genuine decision-making or just the prompt someone wrote. Still, it's a useful demo of what happens when you let an AI agent loose with a task and a budget: the AI burns through both without much to show for it.