A piece published at background-agents.com — author and institutional affiliation unlisted — frames the current moment in AI coding tools as a shift from on-demand assistants to persistent agents running as daemon-like processes alongside human development work. The argument: tools like Devin and its competitors are converging on a model where refactoring, test coverage, security patching, and deployment triggers are handled asynchronously, without explicit human prompting. The piece calls this the <a href="/news/2026-03-14-8-levels-agentic-engineering-framework">"self-driving codebase" paradigm</a> and positions it as the next competitive frontier beyond chat-based copilots like GitHub Copilot.
The technical framing is plausible. The legal one is a problem.
IBM has documented what it calls an "accountability gap" in autonomous agent deployments, citing four specific failures: agents with standing access that rarely expires; invisible delegation where agents reuse human authentication tokens; absent enforcement at action points; and zero post-incident accountability. For background agents operating as continuous processes in production codebases, these failures compound. A misconfigured merge or erroneous dependency push can produce a forensic chain that is genuinely difficult to reconstruct.
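IBM's four failure modes map naturally onto a mitigation pattern: give agents their own scoped, expiring credentials (never reused human tokens), enforce authorization at each action point, and record every decision so a forensic chain exists after an incident. A minimal sketch of that pattern follows — all names (`AgentCredential`, `ActionGate`, the scope strings) are hypothetical illustrations, not any vendor's real API:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str          # the agent's own identity -- not a borrowed human token
    scopes: frozenset      # actions this credential may perform
    expires_at: float      # forced expiry, so standing access cannot accumulate

@dataclass
class ActionGate:
    audit_log: list = field(default_factory=list)

    def authorize(self, cred: AgentCredential, action: str, target: str) -> bool:
        """Enforce scope and expiry at the action point, and always log the
        decision -- allowed or denied -- for post-incident reconstruction."""
        allowed = action in cred.scopes and time.time() < cred.expires_at
        self.audit_log.append({
            "event_id": str(uuid.uuid4()),
            "agent_id": cred.agent_id,
            "action": action,
            "target": target,
            "allowed": allowed,
            "ts": time.time(),
        })
        return allowed

# Illustrative use: a refactoring agent holding a 15-minute credential that
# permits opening pull requests but not pushing dependency changes.
gate = ActionGate()
cred = AgentCredential("refactor-bot", frozenset({"open_pr"}), time.time() + 900)
gate.authorize(cred, "open_pr", "repo/main")          # permitted and logged
gate.authorize(cred, "push_dependency", "repo/main")  # denied and logged
```

The point of the sketch is that the denial is recorded, not just enforced: the audit entries are what turn a 3am incident from an unreconstructable forensic chain into an answerable question.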
The contracts aren't ready either. A February 2026 analysis by Clifford Chance found that virtually all enterprise AI vendor agreements were drafted for passive software tools, leaving gaps around autonomous decision liability, damage exclusions for production incidents, and transparency provisions. Lathrop GPM's review of enterprise contracts reached the same conclusion: "most include only standard disclaimers" with nothing addressing agentic-AI-specific risks.
Courts are starting to weigh in. In <em>Mobley v. Workday</em>, a court found that an AI screening system was "essentially acting in place of the human" and held the vendor directly liable — a ruling that signals judges are willing to look past "tool not agent" defenses when systems operate with sufficient autonomy. (The characterisation of the ruling as establishing direct vendor liability should be verified against the full judgment before relying on it.) The EU AI Act's human oversight requirements, with GPAI obligations active since August 2025, sit in direct tension with agents that explicitly skip per-action human review by design.
The UK is developing a complementary principle: responsibility shifts toward developers and manufacturers as automation reduces human influence. If that framing hardens into enforceable doctrine, it will reshape how autonomous coding tools are scoped, indemnified, and sold.
The background-agents.com piece captures the technical trajectory accurately. What it doesn't resolve — and what vendors in this space haven't resolved either — is who carries the liability when a background agent ships a bad dependency at 3am. Until that question has a contractual answer, enterprise adoption will stay cautious regardless of what the agents can actually do.