When Andy Chen went looking for a tool to help Abnormal Security's go-to-market reps field customer questions, he started where any reasonable engineer would — with Glean, which Abnormal already uses and which he describes as "probably the best retrieval system in the world." It didn't solve his problem.

The issue wasn't the software. It was the task. A sales rep asking about data deletion timelines doesn't just need the relevant document surfaced; they need to know the question should go to the security team rather than be answered directly. A rep asking about a competitor's latest feature needs institutional context: which PM tends to announce things before they ship, which claims are backed by actual Gong recordings, what the real engineering constraints are behind the roadmap spin. None of that lives in any indexed document. It has to be reasoned out of many documents at once.

"Retrieval finds things," Chen writes. "Synthesis knows things." That distinction, deceptively simple on the page, turns out to be the crack that runs all the way up through the enterprise AI stack.

His solution is a pipeline of roughly 20 parallel LLM agents — orchestrated over Modal — that ingest product documentation, Slack threads, Gong call transcripts, Jira tickets, and source code, then write structured output into a GitHub repository. The repo acts as a persistent, versioned knowledge base. "You can read it, diff it, roll it back," he writes. After about two days of total runtime the system had produced 6,000 commits across 1,020 files covering 11 organizational domains. The artifacts go well beyond simple summaries: end-to-end customer journey maps, competitor battle cards with citations traceable to specific call recordings, a complete feature flag inventory pulled directly from the codebase. The whole architecture runs on approximately 1,000 lines of Python.
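The shape of that architecture can be sketched in a few lines. The sketch below is a minimal illustration, not Chen's code: `synthesize` is a stub standing in for the LLM call, a thread pool stands in for Modal's parallel orchestration, and the domain and source names are placeholders.

```python
# Hedged sketch of the fan-out pattern: domain-specific "agents" run in
# parallel, each synthesizing one knowledge artifact and writing it into a
# repo directory. Chen orchestrates real LLM calls over Modal; here a stub
# synthesize() and a ThreadPoolExecutor stand in so the shape is runnable.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile

DOMAINS = ["sales", "security", "competitors"]  # Chen's system covers 11

def synthesize(domain: str, sources: list[str]) -> str:
    # Stand-in for an LLM call that reasons across many documents at once.
    bullets = "\n".join(f"- {s}" for s in sources)
    return f"# {domain.title()} knowledge base\n\nSynthesized from:\n{bullets}\n"

def run_agent(repo: Path, domain: str) -> Path:
    # Each agent owns one domain and writes a structured markdown artifact.
    sources = [f"{domain}/doc-{i}" for i in range(3)]  # placeholder corpus
    out = repo / domain / "summary.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(synthesize(domain, sources))
    return out

repo = Path(tempfile.mkdtemp())
with ThreadPoolExecutor(max_workers=len(DOMAINS)) as pool:
    written = list(pool.map(lambda d: run_agent(repo, d), DOMAINS))

print([p.relative_to(repo).as_posix() for p in written])
```

Running `git add -A && git commit` in the output directory after each pass is what turns those artifacts into the readable, diffable, roll-backable history the essay describes.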

He implies the compute cost was modest; the real investment was knowing what to build. "The hard part isn't the code," Chen notes. "The hard part is knowing what to ask for." That framing matters: designing what synthesis should look like requires someone who understands the organization deeply, which is also, incidentally, the part no SaaS vendor can do on your behalf.

For the cohort of enterprise knowledge platforms currently raising at rich valuations, the essay makes for uncomfortable reading. Chen is explicit that he's not dismissing the retrieval side of the problem: building what Glean built, with its secure connectors, per-customer embedding models, and relevance calibration loops, is genuinely hard and defensible work. But the synthesis layer sitting on top (the cross-domain reasoning, the judgment routing, the continuously updated knowledge artifacts) is another matter: if a single engineer can build a credible version over a weekend on a commodity LLM API, the premium for packaging it as enterprise software has a limited shelf life.