An AI security expert built a working certificate generator using Claude Code and described the experience as "miserable" despite the successful outcome. The project, documented in a detailed blog post, helped migrate The Taggart Institute off Teachable and Discord. The resulting application included security audit logging, GDPR compliance, and cryptographic verification. It works. The process felt awful.
Code quality wasn't the issue. The approval workflow was. The author spent most of their time "reading proposed code changes and pressing the 1 key to accept the changes." This review loop, designed to keep humans in control, becomes tedious fast. You're not coding. You're reviewing. Constantly. A Hacker News commenter pointed out that "YOLO mode," which auto-accepts changes, might have helped. But that trades one problem for another: fewer micro-approvals up front, more macro-debugging later, once a sprawling AI-generated codebase accumulates changes nobody reviewed.
The author's stance on generative AI adds another layer. They work in AI security and object to the technology on societal, environmental, and cognitive grounds. But understanding these tools is part of the job. "I can't do any of this without using and knowing these tools intimately," they wrote. Professionals who dislike AI still have to learn it. The tool produced working code. The experience left them drained.