A Hacker News thread asking how LLMs have changed documentation reading habits has surfaced a pattern developers are actively working through: reaching for AI first, then pulling back.

Top commenter aavci described an explicit course correction: starting by asking Claude to explain topics, then returning to primary documentation and keeping the LLM as a supplement rather than a substitute. The appeal is straightforward. LLMs convert dense reference material into plain language, answer follow-up questions on demand, and surface examples quickly. Documentation platforms are now building this capability in directly.

The problems are equally well-documented: hallucinated API details, stale training data, and a tendency to smooth over the edge cases that official docs are built to capture. Some approaches focus on optimizing how documentation is served to AI agents to reduce these issues.

Other replies converged on a working division of labor: use the LLM to get oriented, then verify anything version-specific or production-critical against the official source. This is not a rejection of AI tools so much as a more deliberate boundary around where to trust them.

The thread reads as a ground-level check on where developer confidence in LLMs currently sits. Adoption is real, but so is the ongoing recalibration of what these tools are actually good for.