Gary Marcus thinks Claude Code is the biggest AI advance since the LLM, and his reasoning is unexpected. A leaked source file from Anthropic's coding agent reveals that it doesn't rely purely on neural networks: a 3,167-line kernel called print.ts runs on classical symbolic AI, IF-THEN conditionals with 486 branch points nested up to 12 levels deep. Marcus, who has argued for hybrid "neurosymbolic" systems for 25 years, sees this as vindication. Pure LLMs are too probabilistic; when a pattern must hold every time, you bring in deterministic code.
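To make the "symbolic scaffolding" concrete, here is a minimal sketch of what deterministic IF-THEN branching around an LLM's output looks like. Everything here is a hypothetical illustration of the pattern; the function name `formatToolResult`, the `ToolResult` type, and the rules themselves are assumptions, not code from the leaked file.

```typescript
// Hypothetical sketch: explicit, auditable symbolic rules applied to a tool's
// output before it reaches the model. Every branch is deterministic -- the
// same input always takes the same path, unlike a probabilistic LLM step.
type ToolResult = { kind: string; exitCode?: number; text: string };

function formatToolResult(r: ToolResult): string {
  if (r.kind === "bash") {
    if (r.exitCode === 0) {
      // Rule: an empty successful run gets an explicit placeholder.
      return r.text.trim() === "" ? "(no output)" : r.text;
    }
    // Rule: failures are labeled with their exit code.
    return `command failed (exit ${r.exitCode}):\n${r.text}`;
  }
  if (r.kind === "file_read" && r.text.length > 2000) {
    // Rule: long file reads are truncated to a fixed budget.
    return r.text.slice(0, 2000) + "\n[truncated]";
  }
  return r.text; // fallback: pass through unchanged
}
```

Multiply this shape by 486 branch points and 12 nesting levels and you get something like the structure Marcus is pointing at: classical production rules, just embedded in a TypeScript function rather than a rule engine.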
The same leak tells a different story. That 3,167-line file is a single function. Other files are worse: QueryEngine.ts runs 46,000 lines, Tool.ts hits 29,000. Anthropic shipped a known bug wasting roughly 250,000 API calls daily. The code uses simple regex for sentiment analysis. Hacker News commenters were unimpressed. One compared Marcus's argument to claiming human muscles "won" because cars require physical operation. Another asked whether the code represents deliberate neurosymbolic design or just badly structured AI output that happened to use conditionals.
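For a sense of why "simple regex for sentiment analysis" drew scorn, here is a minimal sketch of the technique, assuming nothing beyond the description in the leak reporting. The word lists and the function name `classifyTone` are illustrative assumptions, not the actual code.

```typescript
// Hypothetical sketch of regex-based sentiment detection: match a fixed word
// list instead of using a model. Brittle by construction -- "not broken" still
// reads as negative, and any word outside the lists is invisible.
const NEGATIVE = /\b(error|fail(ed|ure)?|broken|wrong|crash(ed)?)\b/i;
const POSITIVE = /\b(success|passed|fixed|works?|done)\b/i;

function classifyTone(message: string): "negative" | "positive" | "neutral" {
  if (NEGATIVE.test(message)) return "negative"; // negative wins on ties
  if (POSITIVE.test(message)) return "positive";
  return "neutral";
}
```

The irony the commenters seized on: inside a frontier language-model product, the part labeled "sentiment analysis" may be a technique that predates neural NLP entirely.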
The deeper tension is real. Anthropic's claims about the share of AI-written code in Claude Code escalated from 70% to 100% between September and December 2025, a trajectory critics found suspicious. If the symbolic scaffolding Marcus praises is actually AI-generated code that no human architect would design, then what we're looking at is technical debt with a theory retrofitted onto it. Marcus himself concedes the symbolic code is "a mess" and says software engineering needs major advances too. An emerging critique likens this sprawling, uncoordinated structure to the "Winchester Mystery House" model of software development: endless additions with no overall plan. Recent assessments also point to quality degradation and reasoning issues, suggesting the engineering is starting to crack.