Kyle Kingsbury has mapped out six new job categories emerging as organizations struggle to deploy LLMs, at a moment when sites like Wikipedia are already contending with autonomous AI agents. Writing on his blog, he identifies "Incanters" (prompt specialists), "Process Engineers" (quality control), "Statistical Engineers" (measuring model variability), "Model Trainers" (feeding human expertise to models), "Meat Shields" (humans who take the fall when ML fails), and "Haruspices" (interpreters of model behavior). The roles range from genuinely technical work to something closer to professional scapegoating.

The Process Engineer role exists because lawyers keep submitting AI confabulations in court. Kingsbury suggests workflows that seed documents with intentionally planted errors, so a firm can verify its editors actually catch mistakes before anything leaves the building. Statistical Engineers face a different problem: LLMs are chaotic systems whose behavior shifts with option ordering, language, and input length. A healthcare model might work well in English but fail pathologically in Spanish. The job is to measure that chaos and work around it. The variability isn't a bug you can patch; it's a property you account for, and accounting for it takes deep, domain-specific effort.
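To make the Statistical Engineer's job concrete, here's a minimal sketch of one such measurement: probing how much a model's multiple-choice answer depends on the order the options are presented in. The `ask_model` function is a hypothetical stand-in for whatever model API you actually call; none of this code comes from Kingsbury's post.

```python
# Sketch: quantify order sensitivity for a multiple-choice prompt.
# `ask_model` is a hypothetical stand-in for a real LLM call.
import itertools
from collections import Counter

def ask_model(question: str, options: list[str]) -> str:
    """Hypothetical LLM call: returns the text of the option the model picks."""
    raise NotImplementedError("wire this up to your model API")

def order_sensitivity(question: str, options: list[str]) -> float:
    """Ask the same question under every option ordering and return the
    fraction of orderings that produce the modal answer.

    1.0 means ordering never changes the answer; a value near
    1/len(options) means the model is effectively picking by position.
    Note this makes len(options)! calls, so keep option counts small.
    """
    answers = Counter(
        ask_model(question, list(perm))
        for perm in itertools.permutations(options)
    )
    top_count = answers.most_common(1)[0][1]
    return top_count / sum(answers.values())
```

Run the same probe per language and per input length and you start to see the pathologies Kingsbury describes, like the healthcare model that holds up in English and falls apart in Spanish.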

The most striking insight involves training data contamination. Almira Osmanovic Thunström showed that a handful of fake articles could make Gemini, ChatGPT, and Copilot spread misinformation about an imaginary disease, much like hallucinated citations in scientific papers. Kingsbury proposes using pre-2023 content as "low-background steel," borrowing the nuclear industry's term for steel smelted before the 1945 atmospheric bomb tests, prized because it's free of fallout radionuclides. He also suggests companies like OpenAI hire subject-matter experts to train models directly. AI companies are discovering that human expertise becomes more valuable as their models get worse at distinguishing truth from garbage. The irony is thick.
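As a rough illustration of the low-background-steel idea (my sketch, not Kingsbury's pipeline), a data-curation step could gate training documents on their earliest attested date, treating anything from the ChatGPT era as contaminated. The record fields `first_seen` and `crawl_date` here are hypothetical; a real pipeline would lean on archival evidence such as Common Crawl snapshot dates or Wayback Machine captures.

```python
# Sketch: filter a corpus down to "low-background" pre-2023 documents.
# The document schema is hypothetical.
from datetime import date

CUTOFF = date(2023, 1, 1)  # pre-2023, per Kingsbury's suggestion

def is_low_background(doc: dict) -> bool:
    """Keep a document only if its earliest attested date predates the cutoff."""
    first_seen = doc.get("first_seen") or doc.get("crawl_date")
    return first_seen is not None and first_seen < CUTOFF

corpus = [
    {"text": "2019 forum post", "first_seen": date(2019, 5, 1)},
    {"text": "2024 listicle", "crawl_date": date(2024, 2, 14)},
]
clean = [d for d in corpus if is_low_background(d)]  # keeps only the 2019 post
```

The hard part, as with actual low-background steel, is provenance: proving a document existed before the cutoff rather than trusting a self-reported date.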

And then there's the Meat Shield role. Companies need humans who can apologize, go to jail, or get fired when AI systems fail. Kingsbury points to the Chicago Sun-Times incident, where freelancer Marco Buscaglia got thrown under the bus for an LLM-generated summer insert whose reading list recommended books that don't exist. The editors and managers above him stayed anonymous. Buscaglia was proximate to the LLM, sure. But everyone in that chain contributed to the tomfoolery. The real growth industry in AI isn't the technology itself. It's the human infrastructure required to absorb its failures.