Jonathan Nen's recent essay "Still Choose Boring Technology" makes a strong case that Dan McKinley's decade-old framework for stack selection has gained new force in the AI coding era. McKinley's 2015 thesis, grounded in his firsthand experience introducing and then removing both MongoDB and Scala from Etsy's stack, argued that engineering teams hold a finite number of "innovation tokens" and should reserve them for product differentiation rather than infrastructure novelty. Nen's extension adds a second, concrete dimension: LLM training data. Because large language models are trained on internet-scale corpora, technologies like PostgreSQL, Redis, REST, and React are massively over-represented in model weights, yielding deep and reliable AI competency. Newer or rapidly changing libraries produce inconsistent, and sometimes dangerously wrong, AI output.

Nen illustrates the argument with two contrasting experiences from his own codebase. When building a rich-text editor with PlateJS, a library that had recently undergone significant breaking API changes, AI assistance became a liability, repeatedly hallucinating outdated patterns and sending him down hours-long rabbit holes. In the same codebase, React Aria components generated by AI worked reliably on the first attempt. The practical consequence was a team decision to consolidate fully on React Aria, not for aesthetic reasons but for AI productivity reasons. This led Nen to his core reframe: exotic technology choices now carry a doubled innovation tax, charged once for the human team's cognitive overhead and once for the AI's inability to reason reliably about unfamiliar territory.

Nen also makes a point that gets less attention than it deserves: boring technology does not eliminate the need for developer expertise — it preserves its relevance. AI trained on PostgreSQL has seen everything from basic tutorials to advanced sharding strategies, but lacks contextual discernment about which pattern fits a given situation. Performance anti-patterns, such as issuing repeated queries inside loops, are a common AI failure mode. A developer with PostgreSQL experience can spot and correct these mistakes; a developer using an unfamiliar library cannot reliably distinguish an AI error from their own misunderstanding. Boring technology is thus defined not only by API stability but by the team's ability to audit AI output and keep a human in the review loop.
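To make the failure mode concrete, here is a minimal sketch of the kind of query anti-pattern described above, the classic "N+1" shape, alongside the JOIN that an experienced reviewer would substitute. The schema and data are invented for illustration, and SQLite stands in for PostgreSQL so the sketch is self-contained:

```python
import sqlite3

# In-memory SQLite database standing in for PostgreSQL, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

# Anti-pattern: one extra query per author (N+1 round trips in total).
# This is the shape an AI assistant will often emit without being asked.
def titles_by_author_slow(conn):
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        ).fetchall()
        result[name] = [title for (title,) in rows]
    return result

# Reviewer's fix: a single JOIN replaces the N per-author queries.
def titles_by_author_fast(conn):
    result = {}
    query = """
        SELECT a.name, p.title
        FROM authors a JOIN posts p ON p.author_id = a.id
        ORDER BY a.id, p.id
    """
    for name, title in conn.execute(query):
        result.setdefault(name, []).append(title)
    return result

# Both produce the same mapping; only the query count differs.
assert titles_by_author_slow(conn) == titles_by_author_fast(conn)
```

Both versions return identical results on toy data, which is exactly why the slow one slips past review when no one on the team has the database experience to recognize the shape.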

The boring-technology philosophy has achieved a kind of cultural saturation — McKinley's original 2015 post has been resubmitted and heavily discussed on Hacker News multiple times, and a dedicated spinoff site, boringtechnology.club, has attracted a loyal following. Simon Willison has noted at least one complicating counterpressure: <a href="/news/2026-03-14-autonoma-rewrites-18-months-of-code-pivots-agentic-qa-platform-away-from-next-js">increasingly capable AI coding agents</a>, given access to documentation, are beginning to perform reliably on novel frameworks, which could erode the training-data advantage of boring tech over time. Even so, Nen's central reframe — that innovation tokens and LLM tokens are now the same decision — gives practitioners something concrete to act on: when evaluating a stack, treat training-data volume and API stability as direct predictors of AI competency, not just as rough proxies for community health.