A self-described AGI skeptic has made a carefully reasoned case that recursive self-improvement in AI may already be underway — and is more plausible than most skeptics acknowledge. Harjas Sandhu, writing on his Substack "Hardly Working," builds the argument from two premises he considers unassailably true: AI can write code, and <a href="/news/2026-03-14-metr-research-half-of-swe-bench-passing-ai-prs-rejected-by-maintainers">some of that code is useful for machine learning research</a>. From those foundations, he argues it logically follows that AI is already accelerating its own development. He cites Andrej Karpathy's observations about AI's utility in iterative optimization tasks, grounding the argument in present-day capability rather than speculative futures.
Sandhu's sharpest question is one he admits he hadn't considered until recently: what is stopping AI coding assistants from accelerating research into entirely different AI paradigms, such as <a href="/news/2026-03-14-yann-lecun-raises-1b-to-build-ai-world-models-startup-ami">neurosymbolic AI</a>? This sidesteps the usual doomer-versus-accelerationist debate. Rather than asking whether current LLMs will bootstrap themselves to superintelligence, Sandhu asks whether they could serve as a catalyst for a successor paradigm: a transition LLMs cannot make on their own, but one they could help humans reach faster.
Sandhu spends roughly a third of the essay on the "AI as Normal Technology" thesis, which holds that AI will diffuse through society at the pace of prior general-purpose technologies like electricity or the internet, and which disputes the coherence of placing AI on an "intelligence spectrum" relative to humans. He engages forecasters including Toby Ord and Helen Toner, and cites projections of a fully automated coder emerging by 2030, while conceding there is currently no way to know which camp is less wrong.
The essay closes with six unanswered questions that temper its own thesis: whether scaling laws will hit hard limits, whether Moravec's Paradox will prevent general reasoning, and how fast any recursive improvement loop might actually run. Sandhu lands on cautious skepticism, expecting AI's societal impact to resemble prior technological diffusions rather than a sudden, discontinuous jump. The six questions do more work than the conclusion: they show precisely where the argument's confidence runs out.