Random Labs, a Y Combinator S24 startup, published a technical blog post this week taking aim at the two architectural approaches that dominate the coding agent market: Recursive Language Models (RLMs) and ReAct (Reasoning + Acting).

The core complaint: both paradigms treat context management as an afterthought. RLMs externalize data into a Python REPL and hand analysis off to sub-LLMs. ReAct agents interleave reasoning traces with action steps. Random Labs argues that neither was built for the multi-hour, multi-file sessions that real software engineering demands — and that both end up papering over their context limitations rather than fixing them.
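For readers unfamiliar with the pattern, the ReAct loop the post criticizes can be sketched in a few lines: the model alternates between emitting a reasoning step ("Thought"), a tool call ("Action"), and reading the result ("Observation"), with the entire growing transcript fed back in each turn. This is a minimal illustration, not any vendor's implementation; `call_llm` and the `ls` tool are canned stand-ins.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned trace for illustration.
    if "Observation:" in prompt:
        return "Thought: I have the file list.\nAnswer: main.py"
    return "Thought: I should list the files.\nAction: ls"

TOOLS = {"ls": lambda: "main.py utils.py"}  # toy tool registry

def react(question: str, max_steps: int = 5) -> str:
    # The whole transcript is re-sent every turn -- this is why long
    # sessions collide with finite context windows.
    transcript = question
    for _ in range(max_steps):
        out = call_llm(transcript)
        transcript += "\n" + out
        if "Answer:" in out:
            return out.split("Answer:", 1)[1].strip()
        if "Action:" in out:
            tool = out.split("Action:", 1)[1].strip()
            transcript += "\nObservation: " + TOOLS[tool]()
    return "no answer"

print(react("Which file is the entry point?"))  # -> main.py
```

The transcript grows monotonically with every thought, action, and observation, which is exactly the property Random Labs argues breaks down over multi-hour sessions.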

The post is a vehicle for positioning Slate, the company's flagship coding agent. The pitch is that Slate maintains persistent, structured knowledge of a codebase throughout long autonomous sessions, rather than relying on context compression, recursive summarization, or memory compaction to stay within token limits. On social media, Random Labs has been blunt about it: Slate doesn't need to "compact its memory." The implicit targets include Devin, SWE-agent, and tools in the Cursor orbit.
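To make "compacting memory" concrete: the pattern Slate claims to avoid is, roughly, collapsing older conversation turns into summaries whenever the transcript exceeds a token budget. The sketch below is a hypothetical illustration of that pattern (the word count is a crude token proxy, and the string merge stands in for an LLM summarization call); it is not taken from any of the named products.

```python
def count_tokens(text: str) -> int:
    # Crude proxy: whitespace-split words stand in for tokens.
    return len(text.split())

def compact(turns: list[str], budget: int) -> list[str]:
    # While over budget, fold the two oldest turns into a one-line
    # "summary" -- detail from early in the session is lost for good.
    while sum(count_tokens(t) for t in turns) > budget and len(turns) > 1:
        merged = "[summary of: %s | %s]" % (turns[0][:20], turns[1][:20])
        turns = [merged] + turns[2:]
    return turns

history = [
    "user: refactor the auth module " * 10,
    "agent: edited auth.py and added tests " * 10,
    "user: now update the docs",
]
print(compact(history, budget=30))
```

The trade-off is visible in the output: the compacted history fits the budget, but the specifics of what was edited early in the session survive only as a lossy summary line.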

The argument lands as the coding agent space fractures around competing architectural bets. Prime Intellect AI has formalized the RLM concept. InfiAgent externalizes full agent state to the filesystem. Random Labs is betting on persistent context. Each is a different answer to the same hard constraint: context windows are finite, real codebases are not.
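The filesystem-externalization bet mentioned above has a simple core idea: agent state lives on disk rather than in the model's context window, so a fresh session can reload it intact. The sketch below illustrates that idea only; the `FileState` class and its JSON layout are hypothetical and not InfiAgent's actual design.

```python
import json
import os
import tempfile

class FileState:
    """Hypothetical on-disk agent state: survives context resets."""

    def __init__(self, path: str):
        self.path = path

    def load(self) -> dict:
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}

    def save(self, state: dict) -> None:
        with open(self.path, "w") as f:
            json.dump(state, f)

path = os.path.join(tempfile.mkdtemp(), "agent_state.json")
session_one = FileState(path)
state = session_one.load()
state["open_tasks"] = ["fix flaky test in ci.yml"]
session_one.save(state)

# A fresh "session" reloads the same state from disk, no context carried over.
print(FileState(path).load()["open_tasks"][0])  # -> fix flaky test in ci.yml
```

The contrast with the ReAct transcript is the point: here, nothing needs to fit in a context window between sessions, at the cost of the agent having to decide what to write down.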

Random Labs was founded in 2024 by Mihir Chintawar and Kiran, and launched through YC with an open-source agent pitched as living "inside your codebase." The full technical details of Slate's architecture were not available at the time of publication — the blog post sits behind a JavaScript render wall — but the company's direction is clear enough from its public positioning.