Rival, an AI research and benchmarking company, just published a study fingerprinting 178 AI models to see how their writing styles compare. They used 32 dimensions to analyze model outputs and found some awkward similarities. Models from completely different providers showed more than 75% writing similarity. The standout example? Gemini 2.5 Flash Lite Preview 06-17 and Claude 3 Opus scored 78.2% similarity despite coming from different companies and carrying very different price tags.
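Rival hasn't published the exact scoring formula, but a common way to turn per-model style fingerprints into a similarity percentage is cosine similarity over the feature vectors. A minimal sketch, assuming 32 normalized stylometric features per model (the feature names and values below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors.

    For non-negative feature values the result lands in [0, 1], which
    maps naturally onto a "percent similar" reading.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 32-dimensional style fingerprints. Each entry would be a
# normalized stylometric measure (average sentence length, hedging-word
# rate, list usage, vocabulary richness, ...). Values are made up.
model_a = [(i % 7) / 10 + 0.1 for i in range(32)]
model_b = [(i % 5) / 10 + 0.2 for i in range(32)]

similarity = cosine_similarity(model_a, model_b)
print(f"writing similarity: {similarity:.1%}")
```

Whatever metric Rival actually uses, the core idea is the same: reduce each model's prose to a fixed-length vector and measure the angle or distance between vectors, so models with near-identical stylistic habits score close to 100%.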

That's a weird result. Google's Gemini and Anthropic's Claude aren't supposed to be related. When their writing patterns cluster this tightly, you have to wonder whether the models share training data, whether one was distilled from the other's outputs, or whether there's some other connection the providers aren't disclosing.

The Hacker News community pushed back on some of the implications. User jefftk pointed out that writing style similarity doesn't mean the models are interchangeable. A model's usefulness depends on how well it understands what you need, not just how it strings words together. Fair point. Two writers with similar styles can have very different capabilities.

But the broader question matters. If companies are training models on each other's outputs, or if the industry is converging on similar approaches, that affects everyone buying these models. You might be paying premium prices for something that writes a lot like a cheaper alternative. Rival's methodology could help buyers make smarter decisions, and the big model providers probably already run similar analyses internally.