Waiting for ChatGPT to finish thinking? Developer ftaip thinks you should be playing games. Their new open-source library, react-waiting-game, drops a tiny arcade cabinet into any loading state: five one-button mini-games rendered on a canvas element.
The library ships with zero runtime dependencies and runs on a single prop. Drop in WaitingArcade with game="runner" and users get a dino-style runner while your API call chugs along. There's also a jellyfish swimmer, gravity flipper, space invader, and rhythm tapper, all rendered in 1-bit pixel art that tints to any color. Each game shares combo multipliers, power-ups, screen shake, and an achievement system backed by localStorage. The component auto-pauses when the browser tab loses focus and plays nice with server-side rendering.
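The article doesn't show how the localStorage-backed achievement system works under the hood, but that kind of persistence usually boils down to a small read-modify-write wrapper around a key-value store. Here is a minimal sketch under stated assumptions: the names (`AchievementStore`, `unlock`, `isUnlocked`, the storage key) are hypothetical and not react-waiting-game's actual API, and the backend is swappable so the same code runs where `localStorage` doesn't exist.

```typescript
// Hypothetical sketch of a localStorage-backed achievement store.
// None of these names come from react-waiting-game's real API.

// Minimal key-value interface: window.localStorage satisfies it in the
// browser, and a Map-backed store stands in everywhere else.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

class MemoryStore implements KVStore {
  private data = new Map<string, string>();
  getItem(key: string): string | null {
    return this.data.get(key) ?? null;
  }
  setItem(key: string, value: string): void {
    this.data.set(key, value);
  }
}

class AchievementStore {
  constructor(
    private backend: KVStore,
    private storageKey = "waiting-arcade:achievements", // assumed key
  ) {}

  // Read the persisted list of unlocked achievement ids.
  private load(): string[] {
    const raw = this.backend.getItem(this.storageKey);
    return raw ? (JSON.parse(raw) as string[]) : [];
  }

  // Record an achievement; returns true only the first time it unlocks.
  unlock(id: string): boolean {
    const unlocked = this.load();
    if (unlocked.includes(id)) return false;
    unlocked.push(id);
    this.backend.setItem(this.storageKey, JSON.stringify(unlocked));
    return true;
  }

  isUnlocked(id: string): boolean {
    return this.load().includes(id);
  }
}

// In the browser you would pass window.localStorage as the backend so
// achievements survive page reloads.
const store = new AchievementStore(new MemoryStore());
store.unlock("first-combo");
```

The injectable backend is also a plausible reason the component can "play nice with server-side rendering": on the server, where `localStorage` is undefined, an in-memory store like the one above keeps the code from throwing.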
On Hacker News, the main ask was for GIF demos to see the games in action. Fair enough. The concept sells itself better in motion.
LLM response times aren't like regular API calls: responses stream in, latency is unpredictable, and a complex query can stretch past 30 seconds. Spinners and skeleton screens were designed for 2-second waits, not 30. In an era where AI costs can exceed human salaries, keeping users engaged while they wait is a smart strategy. It's a band-aid in the sense that faster models would be better, but as band-aids go, this one might stick, and the bundle cost is near zero. The real question is whether users want to context-switch into a game or just want their answer. Probably both, depending on what they're waiting for.