Hayden Field, The Verge's senior AI reporter, spent time in March 2026 applying to real jobs — including open roles at her own employer, Vox Media — and letting AI avatars interview her for them. She tested three platforms: CodeSignal, Humanly, and Eightfold. The piece, published March 11, is a firsthand account of what it actually feels like to answer screening questions from a digital face on a screen.
The platforms' pitch is consistent: instead of recruiters choosing which applicants get a human conversation, an AI interviewer can talk to every single person who applies. No scheduling bottlenecks, no callbacks that never come.
Vendors also claim their tools reduce bias by evaluating what candidates say rather than how they look or carry themselves. Field and The Verge are skeptical, and they have backup. A commenter in the Hacker News thread pointed to the history-llms project out of the University of Zurich, which trains language models on texts published before 1913. When that model was asked to choose between equally qualified male and female candidates, it picked the man — and explained why in terms that would get a human HR manager fired. The bias isn't a bug waiting to be patched. It's in the data the models learn from.
On Hacker News, the sharpest criticism wasn't about accuracy or fairness — it was cultural. The top comment made a simple argument: if a company won't give you thirty minutes with an actual person during the hiring process, the period when both sides are supposed to be on their best behavior, that tells you something about what the job will be like. Field's own reaction tracked with that. Whatever the technical polish of each platform, she kept wanting to talk to a human. As she put it in the piece, the AI interviews felt like being evaluated by something that had learned to simulate interest.