A reproducible failure mode in ChatGPT 5.2 has surfaced on Reddit, where asking the model to define the German adjective "geschniegelt" — meaning "dapper" or "well-groomed" — causes it to enter an observable infinite generation loop. Rather than producing a clean definition, the model repeatedly attempts to formulate a response, stalls, pivots to a new approach, and cycles indefinitely without completing an answer. The post, submitted by user JoeZocktGames on February 14, 2026, attracted dozens of users who confirmed they could reproduce the behavior independently, with some sharing ChatGPT session links showing the loop in action.

Community discussion on both Reddit and Hacker News has produced several competing hypotheses for the root cause. The leading theory, advanced by commenter joaomacp, is that the model conflates "geschniegelt" with the fuller German idiomatic phrase "geschniegelt und gestriegelt" (meaning "spick and span"), treating the two halves as inseparable and entering a recursive resolution loop. That failure mode was vividly mirrored in a comment thread where a user's own prose about the phrase began looping in exactly the same way. Commenter skerit proposed an alternative: "geschniegelt" may be a severely undertrained token in GPT-5.2's vocabulary, leaving the model statistically ungrounded and oscillating between generation paths. A third hypothesis involves an overzealous content filter misidentifying the word as vulgar based on a superficial phonetic resemblance to profanity, causing repeated self-interruptions.
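Both the conflation and the undertrained-token hypotheses ultimately come down to how the tokenizer segments the word. The following toy sketch (the vocabulary is entirely made up for illustration; OpenAI's actual tokenizer and merges are different and not public for GPT-5.2) shows how a word that appears in training data mostly inside a fixed idiom could end up as one long, well-learned token in context but only as short, poorly grounded fragments on its own:

```python
# Illustrative sketch only: a toy greedy longest-match tokenizer.
# The vocabulary below is hypothetical, not OpenAI's real token set.

def tokenize(text, vocab):
    """Greedy longest-match segmentation, a simplification of real BPE inference."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry starting at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # Fall back to a single character (byte-level fallback in real BPE).
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical vocabulary: the full idiom was frequent enough to merge into
# one long piece, while the bare adjective exists only as short fragments.
vocab = {"geschniegelt und gestriegelt", "ge", "schn", "ieg", "elt"}

print(tokenize("geschniegelt und gestriegelt", vocab))
# → ['geschniegelt und gestriegelt']  (one well-attested unit)
print(tokenize("geschniegelt", vocab))
# → ['ge', 'schn', 'ieg', 'elt']  (fragments with weak statistics of their own)
```

If something like this held for the real model, the bare adjective would reach the network as a token sequence it has rarely seen outside the idiom, which is consistent with both the looping behavior and skerit's undertrained-token theory.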

The failure is not confined to the base ChatGPT interface. Community member WatchDog documented that Microsoft 365 Copilot, which runs on OpenAI's underlying model weights, exhibits a related but distinct failure: rather than looping, it returns definitions in Hebrew and Arabic, apparently matching against unrelated tokens or training artifacts. In some ChatGPT sessions, the model returns a definition for an entirely different German word, "geil," suggesting token-level confusion or proximity issues in the model's embedding space. Google's Gemini, by contrast, handles the word correctly and fluently in both English and German, marking this as a model-specific regression rather than a general challenge of German-language processing.

Researchers call these "glitch tokens" — specific tokens or token sequences that produce dramatic, anomalous model behavior due to irregularities in training data distribution or tokenizer construction. Similar phenomena were previously documented in GPT-3 and GPT-4, most famously with the token "SolidGoldMagikarp." The case demonstrates that frontier models of the GPT-5 generation remain susceptible to such vulnerabilities, and that those vulnerabilities propagate downstream into enterprise products like Copilot that share the same weights. OpenAI had not publicly commented on the issue as of publication.