The agentic workflow that compresses software delivery for non-technical teams doesn't require next-generation models. It requires the right process, and economist Arnold Kling has a name for it: AI as business analyst.

In a Substack essay published March 13, Kling argues the current dynamic has it backwards. Non-technical users shouldn't have to learn better prompting. Instead, AI should conduct structured domain interviews, extract data models and business requirements, then pass those artifacts to a coding tool to scaffold the application. The user never has to understand what's happening under the hood.
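Kling describes the pipeline in prose and doesn't prescribe artifact formats. As a minimal sketch of what the handoff might look like, assume the interview step yields a data model plus plain-language business rules, and the "coding tool" step reduces to generating table definitions from the extracted entities. Every name and type below is invented for illustration, not drawn from the essay:

```python
from dataclasses import dataclass

# Hypothetical artifact types for the analyst-to-scaffold handoff.

@dataclass
class Entity:
    """A record type the AI extracted from the domain interview."""
    name: str
    fields: dict[str, str]  # column name -> SQL type

@dataclass
class RequirementsPackage:
    """The bundle passed from the interview step to the coding tool."""
    entities: list[Entity]
    rules: list[str]        # plain-language rules, kept readable for review

def scaffold_sql(pkg: RequirementsPackage) -> str:
    """The 'pass artifacts to a coding tool' step at its simplest:
    emit CREATE TABLE statements from the extracted data model."""
    stmts = []
    for e in pkg.entities:
        cols = ",\n  ".join(f"{name} {typ}" for name, typ in e.fields.items())
        stmts.append(f"CREATE TABLE {e.name} (\n  {cols}\n);")
    return "\n\n".join(stmts)

# Example: artifacts a model might produce after interviewing a tutoring business.
pkg = RequirementsPackage(
    entities=[
        Entity("student", {"id": "INTEGER PRIMARY KEY", "name": "TEXT"}),
        Entity("session", {"id": "INTEGER PRIMARY KEY",
                           "student_id": "INTEGER REFERENCES student(id)",
                           "held_on": "DATE"}),
    ],
    rules=["A session must reference an enrolled student."],
)
print(scaffold_sql(pkg))
```

Keeping the business rules as plain language rather than code is deliberate in this sketch: it preserves a human-readable checkpoint between the interview and the build.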

Kling grounds the framing in Information Engineering, a formal enterprise IT methodology developed in the 1980s by James Martin and Clive Finkelstein. IE positioned human "business analysts" as intermediaries between domain experts and developers. Its core thesis — that data structures outlast business processes, and both outlast the people executing them — was widely accepted as correct. The methodology itself, however, collapsed under its own weight: Business Area Analysis for a large enterprise could reportedly take 12 to 18 months, and the CASE tools meant to automate IE's diagramming workflows were brittle and expensive. Kling's proposal is effectively a second attempt at IE with the costly human intermediary replaced by a model that can conduct an interview in minutes.

Reader response to the post suggests the gap between vision and current capability is narrow. Joseph E., a self-described non-coder at a small SaaS firm, credited Kling's earlier coverage of Claude for his team's adoption of the model. He reported shipping working software in days that previously would have taken a lead developer weeks, and called Anthropic's <a href="/news/2026-03-14-1m-token-context-window-generally-available-claude-opus-4-6-sonnet-4-6">Claude Opus 4.6 and Sonnet 4.6</a> "scary close" to Kling's described workflow.

Joseph also posted Claude's own take on the framing: he fed Kling's essay to the model and shared its reply directly in the comment thread. Claude said the AI-as-business-analyst pattern for structured requirements gathering — producing <a href="/news/2026-03-14-claude-now-builds-interactive-charts-and-diagrams-inline-in-conversations">entity-relationship models, CRUD matrices</a>, role mappings, and user stories — is "real and valuable" with current models. It put a ceiling on the claim: full autonomous development still requires human judgment for edge cases, regulatory requirements such as FERPA compliance, and deployment decisions.
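Of the artifacts Claude's reply names, the CRUD matrix is the easiest to make concrete: a grid recording which role may Create, Read, Update, or Delete each entity. A hypothetical sketch, with roles and entities invented for illustration:

```python
# A CRUD matrix as a simple mapping: (role, entity) -> permitted operations.
# The roles and entities here are illustrative, not from the essay or thread.
crud = {
    ("admin",   "student"): "CRUD",
    ("tutor",   "student"): "R",
    ("tutor",   "session"): "CRU",
    ("student", "session"): "R",
}

def allowed(role: str, entity: str, op: str) -> bool:
    """op is one of 'C', 'R', 'U', 'D'; absent pairs permit nothing."""
    return op in crud.get((role, entity), "")

def as_table(matrix: dict[tuple[str, str], str]) -> str:
    """Render the matrix as the kind of grid an IE-era analyst would draw."""
    roles = sorted({r for r, _ in matrix})
    entities = sorted({e for _, e in matrix})
    lines = ["role".ljust(10) + "".join(e.ljust(10) for e in entities)]
    for r in roles:
        lines.append(r.ljust(10) +
                     "".join(matrix.get((r, e), "-").ljust(10) for e in entities))
    return "\n".join(lines)

print(as_table(crud))
```

An artifact in this form is small enough for a non-technical owner to sanity-check by eye, which is exactly the review question the pattern raises.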

That ceiling is where the workflow question gets concrete. A human in the loop for judgment calls is already part of how Kling envisions the process. What isn't settled is whether AI-generated requirements artifacts are reliable enough to pass directly to a code scaffold, or whether a human needs to review them before the build starts. Teams attempting the pattern at scale will produce real data on that question fast, and that data will matter more than any further theoretical argument about what current models can do.