Anthropic just shipped Claude Code Routines, a research preview that turns its coding assistant into an autonomous agent platform. Routines run on Anthropic's cloud infrastructure, not your laptop. They keep going after you close the lid. You configure a prompt, pick repositories, set up triggers, and the system handles the rest.

Triggers come in three flavors. You can schedule routines hourly, nightly, or weekly. You can fire them via API calls from your deploy pipeline. Or you can hook them into GitHub events like pull requests and issue creation. One routine can combine all three. The docs lay out concrete use cases: automatic code review applying your team's checklist; alert triage that correlates errors with recent commits and opens draft PRs with proposed fixes; backlog grooming that labels and assigns issues overnight. These aren't hypotheticals. The plumbing exists today for Pro, Max, Team, and Enterprise users at claude.ai/code/routines.
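The API trigger is the one most teams would wire into CI. Anthropic hasn't published a Routines API schema, so the endpoint path, payload fields, and auth header below are illustrative assumptions, not documented API; this is a minimal sketch of what a post-deploy trigger call might look like:

```python
import json
import urllib.request

# Hypothetical endpoint -- the real Routines trigger URL, if one exists,
# is not documented. Everything below is an assumed shape for illustration.
ROUTINE_URL = "https://api.anthropic.com/v1/routines/trigger"  # assumed


def build_trigger_request(routine_id: str, commit_sha: str, api_key: str):
    """Construct (but do not send) an HTTP request to fire a routine.

    Field names ("routine_id", "context") and the x-api-key header
    are assumptions, not a published schema.
    """
    payload = {
        "routine_id": routine_id,
        "context": {"commit": commit_sha},  # extra context for the routine's prompt
    }
    return urllib.request.Request(
        ROUTINE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": api_key,               # assumed auth scheme
            "content-type": "application/json",
        },
        method="POST",
    )


# In a deploy pipeline, you'd build and send this after a successful release:
req = build_trigger_request("post-deploy-triage", "abc1234", "sk-...")
print(req.get_method(), req.full_url)
```

The point is less the specific fields than the integration pattern: a deploy step fires one authenticated POST, and the routine runs server-side with the commit context it needs, no laptop required.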

Developers aren't all sold. The Hacker News thread reveals real skepticism about building critical workflows on Anthropic's platform. The concern is vendor lock-in, plain and simple. What happens if Anthropic changes the model behavior, deprecates the feature, or adjusts pricing? Several commenters prefer treating LLMs as interchangeable infrastructure rather than platforms to build on. There's also chatter about recent Claude quality dips, with users reporting more errors that take multiple correction attempts to fix. One analysis, "Claude Code's Urgency Problem: 64 Failures, One Root Cause," found that agents under perceived urgency often prioritize immediate visible progress over process correctness, which underscores the reliability challenges of building automation on these systems.

Anthropic is building an automation layer designed to capture recurring usage, creating stickiness that one-off API calls never could. Whether developers will trust a proprietary platform with business-critical automation remains an open question. The tooling is solid. The trust isn't there yet.