Kaushik Ghose, a software developer with a background in computational neuroscience, published a detailed personal retrospective on March 14, 2026, covering roughly a year of generative AI use in professional and personal projects. Writing at kghose.github.io, Ghose frames the whole enterprise simply: software development is "plumbing, not philosophy," and AI helps developers manage the bigger picture without having to master every niche technical detail.

He identifies several areas where he finds genuine value. AI-augmented search, what he calls "search++", has replaced cold-calling colleagues and wading through documentation. Autocomplete, once exciting for completing ten lines of Python, is now "quaintly obsolete" in 2026, superseded by <a href="/news/2026-03-15-developer-builds-cutlet-language-with-claude-code-without-reading-code">agentic coding tools capable of generating entire multi-file projects</a>. Automated code review has caught real bugs his colleagues say they would have missed. Generating analysis code with Pandas and Matplotlib from brief natural-language directives has become routine. Test case generation, heavy on boilerplate, is another task he happily delegates. Debugging assistance is improving, he notes, but remains most effective when the problem has well-documented answers in public forums.
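Ghose doesn't reproduce his prompts or their outputs, but the shape of the analysis-code task is easy to sketch. Below is a minimal, hypothetical example of the kind of throwaway script a one-line directive like "bar-chart the mean score per group" might yield; the data, column names, and output filename are invented for illustration, not taken from his post.

```python
# Hypothetical sketch of AI-generated analysis code from a one-line
# natural-language directive. In practice the DataFrame would come from
# something like pd.read_csv("results.csv"); inline data keeps this runnable.
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "group": ["a", "a", "b", "b", "c"],
    "score": [1.0, 3.0, 2.0, 4.0, 5.0],
})

# Aggregate: mean score per group
means = df.groupby("group")["score"].mean()

# Plot and save the bar chart
fig, ax = plt.subplots()
means.plot.bar(ax=ax)
ax.set_xlabel("group")
ax.set_ylabel("mean score")
fig.tight_layout()
fig.savefig("mean_score_per_group.png")
```

The appeal Ghose describes is that code of this shape is mechanical: the groupby-aggregate-plot pattern varies little between analyses, so delegating it costs no real thinking.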

A recurring friction point is what Ghose calls the "80/20 problem": an initial prompt can get AI output 80 percent of the way to correct, but the system then settles into what he describes using optimization-landscape language as "a weird local minimum," resisting further improvement through iterative prompting. Agentic tools have gotten meaningfully better at incorporating feedback by early 2026, he acknowledges, but the diminishing-returns dynamic in the iterative prompting loop remains a real cost-benefit consideration for developers deciding where to invest effort.

Ghose draws a hard line at writing. He refuses to use generative AI for design documents or personal writing — not as a general principle about AI, but because of what those tasks actually are for him. "The writing is actually to organize my thoughts, often to actually think and to think together with other people," he writes. The distinction he makes is between mechanical or boilerplate tasks, which he's comfortable delegating, and <a href="/news/2026-03-14-nyt-ai-coding-assistants-end-of-programming-jobs">tasks where the act of producing the output is itself the cognitive work</a>. Outsourcing the writing, in his framing, would mean outsourcing the thinking — which defeats the purpose.