A Hacker News thread has raised questions about whether MiniMax's M2.5 model was trained on outputs from Anthropic's Claude Opus 4.6, after users observed instances of M2.5 identifying itself as "Claude" in certain conversational contexts. If substantiated, the allegation would imply knowledge distillation, a technique in which a model is trained on outputs generated by a more capable frontier system. That practice is explicitly prohibited by Anthropic's terms of service, as well as those of most major AI providers. MiniMax has not issued a public statement addressing the claims.
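For readers unfamiliar with the mechanics: in its classic form, distillation trains a student model to match a teacher's output probability distribution. A lab distilling through an API would only see sampled text, not probabilities, so it would instead fine-tune on the teacher's completions; the logit-matching form is simply the easiest to illustrate. The toy sketch below (a generic illustration, not any lab's actual pipeline) computes the standard temperature-scaled soft-label cross-entropy used as a distillation loss.

```python
# Toy sketch of the classic knowledge-distillation loss: cross-entropy of the
# student's distribution against the teacher's temperature-softened "soft
# labels". Illustrative only; real pipelines operate on full vocabularies and
# batches of sequences.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft labels.

    Temperature > 1 flattens both distributions so the student also learns
    from the teacher's relative preferences among low-probability tokens.
    """
    t = softmax([x / temperature for x in teacher_logits])
    s = softmax([x / temperature for x in student_logits])
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# When the student already matches the teacher, the loss bottoms out at the
# teacher's entropy; any divergence raises it (Gibbs' inequality).
teacher = [2.0, 0.5, -1.0]
aligned = distill_loss(teacher, teacher)
diverged = distill_loss(teacher, [-1.0, 0.5, 2.0])
```

The allegation in this story concerns the API-text variant, where only the sampled completions stand in for the soft labels above.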

The Hacker News community was broadly skeptical of the self-identification evidence. Several commenters pointed out that LLMs are trained on internet corpora containing countless conversations in which models identify themselves as Claude, GPT-4, or other systems, so such responses are more likely dataset contamination and pattern-matching than genuine evidence of training lineage. One commenter noted that the effect runs both ways: prompting <a href="/news/2026-03-14-1m-token-context-window-generally-available-claude-opus-4-6-sonnet-4-6">Claude</a> in Mandarin can cause it to identify itself as DeepSeek, suggesting that a model's identity claims are fundamentally unreliable as forensic indicators of its origin.

The distillation question cuts along familiar lines. Supporters argue that if the technique produces near-frontier-quality open models running on consumer hardware at a fraction of the cost, that's a net gain for everyone outside the handful of labs funding frontier research. Critics call it free-riding: proprietary training runs cost hundreds of millions of dollars, and distillation lets competitors shortcut that investment without bearing any of the expense.

The harder problem is that none of this is easy to prove from the outside. Without auditable training pipelines or transparent model cards, claims about training data provenance are difficult to confirm or refute. MiniMax is not the first Chinese AI lab to face this kind of scrutiny as its models have closed the gap with Western frontier systems — it is just the latest. Agent Wars could not independently verify the claims, and will update this story if MiniMax or Anthropic responds.