USC researchers have fired a warning shot about what happens when billions of people use the same handful of AI chatbots. In an opinion paper published March 11 in Trends in Cognitive Sciences, a team led by Morteza Dehghani argues that large language models are standardizing human expression and narrowing how we think. People differ widely in how they write and reason, but "when these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized," says Zhivar Sourati, the study's first author and a PhD student at USC Viterbi. The researchers point to studies showing that LLM outputs are less varied than human writing and skew toward Western, educated, industrialized, rich, and democratic perspectives: direct, linear arguments and cause-and-effect logic rather than the circular storytelling or relational reasoning common in other cultures.
The paper frames this as a threat to collective wisdom. Cognitive diversity helps groups solve problems and adapt; when everyone's writing and reasoning starts to sound the same, that edge erodes. And LLMs demonstrably shape how people write and speak. Sourati's deeper worry is that these systems subtly redefine what counts as credible speech, a correct perspective, or even good reasoning. The Air Force Office of Scientific Research funded the work, a hint that military planners see cognitive standardization as a risk to decision-making, one that could feed groupthink in command structures.
The researchers want AI developers to incorporate more real-world diversity into training data. That's the ask. But training data isn't the whole problem: the paper notes that people feel less creative ownership over writing when they use chatbots to polish it. Some observers think market dynamics will correct for this, that distinctive communication will come to command a premium. Maybe. Or we may simply normalize the sameness. Either way, the homogenization is already underway.