A March 10, 2026 investigation by Josh Dzieza, published jointly by New York Magazine and The Verge, documents how laid-off white-collar professionals — lawyers, scientists, journalists, and content marketers — are entering a precarious gig economy producing training data for the very AI systems that disrupted their careers. The piece follows workers like "Katya," a former journalist who, after AI automated much of her content-marketing work, was recruited via LinkedIn by a company called Crossing Hurdles onto Mercor's platform, where she was interviewed by an AI named Melvin before being onboarded into a Slack-based assembly line. Her work involved writing prompts, crafting ideal chatbot responses, and building evaluation rubrics, all under strict NDAs that concealed the identity of the end client, referred to only as "the client."

Central to the story is Mercor, a labor marketplace founded in 2023 by three then-19-year-olds from the Bay Area — Brendan Foody, Adarsh Hiremath, and Surya Midha. Originally launched as an AI-mediated hiring platform connecting overseas software engineers with tech companies, Mercor pivoted into AI training data after receiving heavy inbound demand from AI labs. The company now claims approximately 30,000 professionals work on its platform each week, with confirmed clients including OpenAI and Anthropic. By 2025, Mercor had reached a $10 billion valuation, reportedly making its three founders the world's youngest self-made billionaires. Competitors <a href="/news/2026-03-14-gig-workers-training-humanoid-robots-physical-ai">Scale AI and Surge AI occupy similar territory</a>, with Scale claiming over 700,000 credentialed workers and Surge marketing its roster of Supreme Court litigators and McKinsey principals.

The structural instability of this model isn't incidental — it's built in. Katya's first project was canceled without warning two days after she started, leaving her financially stranded, only for a new contract offer to arrive hours later on a Sunday evening with a 45-minute window to accept. The work itself — producing "golden outputs," reasoning traces, and adversarial "stumpers" — draws on genuine domain expertise, but compensation is volatile and project-dependent, with no employment benefits or security. Workers frequently do not know which AI lab they are training, which model, or <a href="/news/2026-03-14-ai-companion-chatbots-hidden-labor">what it will ultimately be used for</a>, and the question of consent hangs over the whole enterprise unanswered. These conditions distinguish the work from earlier crowdsourced annotation platforms like Amazon Mechanical Turk: the workers are former lawyers and PhDs rather than casual click-workers, the tasks require years of professional judgment, and nobody has clear answers about who is accountable for how that judgment gets baked into deployed AI. Dzieza's piece frames this as what one industry veteran called "the largest harvesting of human expertise ever attempted" — a description the 45-minute contract window does little to contradict.