Kyle E. Mitchell just published a runnable reference implementation for context engineering. Mitchell is a software engineer and attorney who publishes under the outcomeops GitHub handle. The repo breaks context engineering into five components: Corpus, Retrieval, Injection, Output, and Enforcement. Each folder runs against a Spring PetClinic codebase augmented with Architecture Decision Records (ADRs), using Amazon Bedrock with Claude for generation and Titan for embeddings. If you have an AWS account, you can clone it and run it today.
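To make the five components concrete, here is a minimal sketch of the pipeline. The names, documents, and IDs are hypothetical stand-ins, keyword matching stands in for Titan embeddings, and `generate()` stubs what the repo does with Bedrock Claude:

```python
def build_corpus():
    # Corpus: the documents the model is allowed to rely on.
    return [{"id": "adr-001", "text": "Use Spring Data JPA for persistence."}]

def retrieve(corpus, query):
    # Retrieval: select relevant documents (keyword overlap stands in
    # for the repo's Titan embedding search).
    words = query.lower().split()
    return [d for d in corpus if any(w in d["text"].lower() for w in words)]

def inject(docs, task):
    # Injection: assemble the prompt around the retrieved context.
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return f"Context:\n{context}\n\nTask: {task}\nCite sources by id."

def generate(prompt):
    # Output: stand-in for a Bedrock Claude call; returns a cited answer.
    return "Per [adr-001], repositories should extend JpaRepository."

def enforce(output, docs):
    # Enforcement: the output must cite at least one retrieved document.
    return any(f"[{d['id']}]" in output for d in docs)

corpus = build_corpus()
docs = retrieve(corpus, "persistence layer")
answer = generate(inject(docs, "add a save method"))
print(enforce(answer, docs))  # True: the answer cites adr-001
```

The point of the skeleton is the shape, not the implementations: the first three functions are any RAG system, and the last two are what the repo adds.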
Here's the distinction that matters. A system with just Corpus, Retrieval, and Injection is basic RAG. Output and Enforcement are what make it context engineering. Output produces reviewable artifacts shaped by retrieved context. Enforcement checks that generated content actually cites what it relied on.
Basic RAG finds relevant docs. Context engineering produces something traceable.
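One way to picture what an enforcement check does (this is a hypothetical sketch, not the repo's exact code): extract the `[id]`-style citations from the generated output and verify they refer only to documents that retrieval actually surfaced.

```python
import re

def check_citations(output, retrieved_ids):
    # Hypothetical enforcement check: every citation in the output must
    # point at a document that retrieval actually returned.
    cited = set(re.findall(r"\[([A-Za-z0-9_-]+)\]", output))
    unknown = cited - retrieved_ids       # cited but never retrieved: suspect
    return {
        "cited": cited,
        "unknown": unknown,
        "unused": retrieved_ids - cited,  # retrieved but never relied on
        "passes": bool(cited) and not unknown,
    }

report = check_citations(
    "Per [adr-001], use JpaRepository.", {"adr-001", "adr-002"}
)
print(report["passes"])  # True; adr-002 is reported as unused
```

A check like this is cheap, deterministic, and turns "the model probably used the context" into something a reviewer can verify.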
Most AI coding assistants give you generic output that you then adapt to your team's patterns. Mitchell's approach feeds the model your ADRs, code, and standards at decision time. The output already conforms to how your team works.
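Feeding team artifacts in at decision time might look like the following prompt assembly. The ADR fields and standards here are invented for illustration, not taken from the repo:

```python
def decision_time_prompt(task, adrs, standards):
    # Hypothetical prompt assembly: the team's own ADRs and standards
    # are injected before generation, so the first draft already
    # conforms instead of being adapted afterward.
    adr_block = "\n".join(
        f"[{a['id']}] {a['title']}: {a['decision']}" for a in adrs
    )
    std_block = "\n".join(f"- {s}" for s in standards)
    return (
        "Follow these architecture decisions and cite them by id:\n"
        f"{adr_block}\n"
        "Coding standards:\n"
        f"{std_block}\n"
        f"Task: {task}\n"
    )

prompt = decision_time_prompt(
    "add an owner search endpoint",
    adrs=[{"id": "adr-007", "title": "REST style",
           "decision": "Controllers stay thin; logic lives in services."}],
    standards=["Constructor injection only", "No field-level @Autowired"],
)
```

The design choice worth noticing: the constraints travel with the prompt, so the same enforcement step can later check the output against exactly the IDs that were injected.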
The repo shows the pattern end-to-end so teams can build it themselves or evaluate commercial tools making similar claims. Mitchell has spent years building developer tooling with an attorney's precision. This is executable documentation from someone who cares whether the output is actually correct.
The harder shift isn't technical. Mitchell notes that roles, KPIs, and decision rights in traditional software orgs assumed AI couldn't read your corpus. Now that it can, those middle layers need rethinking. The repo handles the code side. The org chart is your problem.