An essay published by Asimov Press on March 23, 2026 makes a structural argument against one of AI's most common ambitions: that scaling current systems will eventually produce paradigm-shifting scientific discoveries. It won't, the author contends, not because of raw capability limits but because of how these systems are built.
The argument draws on Thomas Kuhn's philosophy of science. Systems like DeepMind's AlphaFold, GNoME, and Meta's ESM3 are trained on human-curated datasets with predefined ontologies. That training makes them precise predictors within existing scientific frameworks. It does not make them capable of proposing entirely <a href="/news/2026-03-16-llms-epicycles-intelligence-vardanian">new explanatory schemas</a>, which is what genuine scientific revolutions require.
The essay coins the term "hypernormal science" for the failure mode: a future where AI accelerates incremental research at scale while the capacity for genuine paradigm shifts atrophies. The author reaches back to Maxwell, Einstein, and Darwin not to gesture at historical prestige but to show that each breakthrough required abandoning the prevailing conceptual framework, not refining it. The essay's clearest illustration is Harry Beck's 1933 redesign of the London Underground map. The problem with the old map was never insufficient geographic detail; it needed a different schema entirely. Current AI, the essay argues, is built to add detail. It is not built to invent new schemas.
Hacker News discussion surfaced two complications the essay doesn't fully address. One commenter questioned whether the remaining frontiers of scientific understanding are even empirically testable, suggesting the opportunity space for paradigm shifts may be narrower than the essay implies. Another flagged the risk of AI generating plausible-sounding hypotheses beyond its training distribution, making genuine novelty hard to distinguish from sophisticated confabulation.
The essay's constructive case is its sharpest point: this is a design choice, not an inevitability. The author calls for <a href="/news/2026-03-14-billion-parameter-theories-llms-complex-systems">visionary machines</a> — systems engineered to devise new conceptual vocabularies rather than optimize within existing ones. Whether anyone is actually building toward that is a question the essay leaves open. Given how much current investment is flowing into AI scientist pipelines that benchmark against existing literature, the answer so far appears to be no.