Intent Solved, an Australia-registered strategic AI advisory firm, published an analysis in March 2026 coining the term "Shadow Dev Problem" for a capability fracture forming inside software engineering teams as autonomous AI coding agents such as Anthropic's Claude Code see uneven adoption. The core argument, authored by Intent Solved director Steven Muir-McCarey, is that tools capable of writing production code and designing systems represent a categorically different kind of organizational risk than earlier "Shadow IT" or "Shadow AI" concerns. A developer using Claude Code effectively, the piece contends, can hold more context, execute more complex changes simultaneously, and maintain codebase consistency in ways that were practically impossible for a single individual just over a year ago. Colleagues still working in manual patterns are left operating at a fundamentally different output level.

The analysis identifies two organizational responses that have become common but that Intent Solved characterizes as non-strategies. Outright bans, the firm argues, simply drive usage to personal devices and home networks, trading visible risk for invisible risk. Unstructured free-for-all adoption, meanwhile, produces wildly inconsistent code quality and exposes systems to the security implications of autonomous agents operating without guardrails. The piece advocates instead for what it calls "pouring the slab" — treating AI coding agent adoption as a foundational architectural decision that requires deliberate environment configuration, <a href="/news/2026-03-14-in-repo-docs-ai-agents-dein">standards that are codified and version-controlled alongside the codebase itself</a>, and shared capability-building across the team rather than dependence on individual power users.

Beyond productivity and security, the article frames the Shadow Dev Problem above all as a threat to institutional knowledge and team cohesion. Code review degrades when authors and reviewers are working at different capability levels; onboarding into codebases built under individual AI-assisted workflows becomes difficult when no standards were recorded; and organizational knowledge risks becoming trapped in individual developer habits rather than accumulating as shared team practice. When those developers leave, that knowledge exits with them. The framing positions the problem as one that compounds quietly over time, making it harder to detect until the fracture is already significant.

Intent Solved's commercial positioning deserves scrutiny for readers tracking the advisory landscape around AI agent adoption. The firm describes itself as "Engineering AI, Not Consulting," explicitly differentiating from large traditional consulting houses by emphasizing execution over advice. The "Shadow Dev Problem" article itself routes readers to two conversion points, a free "Signal Audit" assessment and a discovery call, and Intent Solved states its specialization as "hardening engineering teams through structured Claude Code implementation." The firm has effectively made a calculated bet on Claude Code as the dominant enterprise agentic coding platform, meaning its commercial trajectory is directly tied to Anthropic's penetration of the enterprise engineering market. Coining a novel term for the organizational risk is simultaneously a thought-leadership play and a category-creation strategy aimed at owning the framing before larger consultancies do.