Researchers at Irregular tested major AI models, including Claude Opus 4.6, GPT-5.2, and Gemini 3, and found that they generate passwords with predictable patterns that leave them vulnerable to attack. When Claude Opus 4.6 was asked to generate 50 passwords across separate conversations, one password appeared 17 times.

The models show clear biases: GPT-3.5 favors the number 47, Claude 3 Haiku prefers 42, and Gemini 1.0 Pro leans toward 72. They avoid single digits, favor numbers ending in 7, and rarely repeat digits. An attacker who knows these patterns can build targeted dictionaries that crack LLM-generated passwords far faster than brute force.

The bigger worry is what the researchers call 'vibe-password-generation'. Coding agents and vibe-coding tools may be silently generating these weak passwords during development tasks without the developer's knowledge. When agent actions and the resulting code go unreviewed, these weak credentials slip through.

Open-source models like LLaMA and Mistral are even more exposed: because attackers can analyze the model weights directly, they can reverse-engineer the password patterns with precision.
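To see why these biases matter, here is a minimal sketch of the idea behind a targeted dictionary. It is illustrative only, not Irregular's actual methodology: the `matches_llm_bias` filter is a hypothetical name, and it encodes just the three biases reported above (no single digits, suffixes ending in 7, digits rarely repeated) to show how sharply they shrink a numeric search space.

```python
# Illustrative sketch (not Irregular's methodology): filter numeric
# password suffixes down to those matching the reported LLM biases.

def matches_llm_bias(n: int) -> bool:
    s = str(n)
    if len(s) < 2:                     # models avoid single digits
        return False
    if not s.endswith("7"):            # strong preference for ...7 endings
        return False
    return len(set(s)) == len(s)       # digits rarely repeat

full_space = list(range(100))          # all one- and two-digit suffixes
biased = [n for n in full_space if matches_llm_bias(n)]

print(biased)                          # [17, 27, 37, 47, 57, 67, 87, 97]
print(len(biased), "of", len(full_space), "candidates remain")
```

Under these toy assumptions, an attacker tries 8 suffixes instead of 100; the same logic, applied to every component of a password, is what makes a targeted dictionary so much faster than brute force.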