A March 2026 essay from Dynomight is making waves in AI development circles, diagnosing a pathology shared by human writers and large language models alike: a compulsive reliance on bullet points, nested headers, and hierarchical lists at the expense of coherent prose. The piece opens with a deliberately painful demonstration of maximally fragmented formatting (sections within subsections, numbered lists inside bullets, hierarchical taxonomy for its own sake) before contrasting it with a calm, readable paragraph. The author's central puzzle: writers who produce heavily formatted content, when asked to name writing they admire, consistently point to prose-dominant work, which suggests they are systematically violating their own aesthetic preferences.

The RLHF angle is the essay's most consequential finding for AI agent developers. Because human raters in reinforcement learning pipelines read visual structure as a sign of effort and rigour, the argument goes, they have systematically rewarded formatted outputs, creating a feedback loop that encodes a low-quality stylistic habit directly into model weights. The result is that generative models have learned to treat bullet points as a credibility signal rather than a communication tool. The author names a related mechanism 'chain-of-thought blathering': structural scaffolding lets models pad outputs with formatted noise while doing relatively little reasoning work per bullet, and fragmented lists conveniently hide incoherence, since disconnected points never need to flow logically into one another the way paragraphs must.
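The feedback loop is easy to caricature in code. The toy sketch below is not from the essay and does not model any real RLHF pipeline; the function names (`surface_structure_score`, `rater_prefers`) and the marker-counting heuristic are invented for illustration. It shows how a rater who uses visible structure as a proxy for effort will, given two responses with identical substance, reliably generate preference data that favours the bulleted one:

```python
# Hypothetical illustration: a structure-biased rater heuristic and the
# preference data it would feed into a reward model.

def surface_structure_score(text: str) -> int:
    """Count formatting markers a hurried rater might read as rigour."""
    markers = 0
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("- ", "* ", "#")):
            markers += 1                      # bullet or header line
        elif stripped and stripped[0].isdigit():
            markers += 1                      # numbered-list line
    return markers

def rater_prefers(a: str, b: str) -> str:
    """Pick whichever response *looks* more organised, ignoring substance."""
    return a if surface_structure_score(a) >= surface_structure_score(b) else b

prose = ("The cache bug stems from a stale TTL check, "
         "so the fix is to refresh the timestamp on every write.")
bullets = ("# Analysis\n"
           "- Bug: stale cache\n"
           "- Cause: stale TTL check\n"
           "- Fix: refresh timestamp on write")

# Identical content, different packaging: the biased rater picks the bullets,
# and every such comparison nudges the reward model toward formatting.
print(rater_prefers(prose, bullets) is bullets)  # True
```

Aggregated over millions of comparisons, a systematic tilt like this is exactly the kind of signal a reward model would internalise, which is the essay's point about how the habit ends up in the weights.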

The essay also surfaces a market-dynamics explanation that applies beyond AI. Formatting functions as a trust shortcut in low-signal environments: readers cannot quickly verify the quality of dense reasoning, so visible structure creates an impression of organisation and credibility. This is why SEO content slop converged on the same aesthetic years before LLMs arrived; it optimises for skimming, not reading. The author draws a parallel to Gresham's law, noting that format-heavy writing lets readers rapidly assess its 'crap level,' while a genuine essay demands significant time investment before its value is knowable.

The essay closes on measured optimism: as model quality improves and users stop needing visual scaffolding as a trust signal, the bullet-point reflex should fade. That may be true eventually. But the same RLHF pipelines that created the habit are still running, and nothing in the current training ecosystem explicitly penalises formatting excess. The raters haven't updated their heuristics. So for now, the gradient still points toward headers.