Software engineer Nelson Figueroa published a candid reflection in March 2026 after using Claude Code, Anthropic's agentic coding tool, to submit his first AI-assisted pull request. The contribution added ERB syntax highlighting support to Chroma, the default syntax highlighter used by the Hugo static site generator, a feature Figueroa had wanted for his own blog for years. The PR was approved and merged by Chroma maintainer Alec Thomas, making it a successful open-source contribution by any objective measure. Yet Figueroa describes the experience as leaving him feeling empty and fraudulent, with worse impostor syndrome than before he started.
The post cites specific voices to explain why AI-assisted output can feel psychologically hollow even when it works. Figueroa quotes Xe Iaso ("Whenever I have Claude do something for me, I feel nothing about the results") and Ori Bernstein's analogy of hiring someone to solve your jigsaw puzzle. He acknowledges a genuine paradox: without Claude Code he likely lacked the cognitive bandwidth to navigate an unfamiliar codebase and produce the contribution at all, yet that dependency compounds rather than resolves the discomfort. He also notes that AI-assisted velocity is now an explicit factor in his workplace performance reviews, which suggests the emotional friction will only intensify as <a href="/news/2026-03-14-emacs-vim-ai-terminal-native-advantage">agentic tooling moves from optional to expected</a>.
Hacker News commenters largely challenged Figueroa's framing. The top responses argued that understanding the problem domain, directing the tool, validating output, and shepherding a contribution through maintainer review constitutes legitimate engineering work: the AI is one component in a human-directed process, not a replacement for it. Others drew historical parallels to the collapse of specialized DBA roles through the 2000s, as ORMs and storage economics automated schema work, arguing that craftsmanship narratives have consistently lagged behind tooling shifts without the ecosystem collapsing. Another commenter noted the irony that the experience mirrors how managers work, facilitating outputs through others, which society has long considered a valid professional contribution.
The maintainer side of the story is equally telling. Alec Thomas approved and merged the PR without apparent friction or special scrutiny, treating it as a routine contribution. For a new syntax lexer, the review checklist is the same regardless of how the code was written: does it work, does it follow project conventions, does it pass CI? On that basis, where the code came from doesn't much matter. That's not the case everywhere: high-stakes projects, security-critical libraries in particular, have started requiring AI disclosure or banning AI-assisted contributions outright, while lower-surface-area projects have mostly gone the other way, accepting or even welcoming such contributions for reducing friction on tedious tasks. The concern that surfaces in maintainer discussions isn't the individual quality-controlled AI PR but volume: a flood of low-effort AI-generated issues and PRs that inflates triage burden even when each is rejected quickly. Figueroa's self-aware framing, calling his own submission "slop" and thanking Thomas for dealing with it, captures the irony that the contributors most likely to feel fraudulent are precisely those who have done enough human quality control to make their contributions legitimate.