Joelchrono, an indie web blogger, implemented the human. protocol on their personal site. The system lets website owners vouch for other sites they believe are written by actual humans; a browser extension then flags vouched sites as you browse. It's a grassroots response to the flood of AI content filling search results and social feeds.
The setup is straightforward if you use a static site generator. Joelchrono uses Jekyll, with a simple YAML file listing vouched URLs and dates that gets compiled into JSON at build time via Liquid templates. Add a link tag to your HTML head and you're done. Humans manually curating lists of other humans they trust. That's the whole system.
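To make the build step concrete, here's a minimal sketch of how a Jekyll site can turn a YAML data file into JSON with Liquid. The file names (`_data/vouches.yml`, `vouched.json`) and the JSON shape are my assumptions for illustration, not Joelchrono's actual code or the protocol's required format:

```liquid
---
layout: null
---
{% comment %}
  Hypothetical sketch: Jekyll compiles this template (saved as
  e.g. vouched.json in the site root) at build time. It reads
  _data/vouches.yml, a list of entries like:
    - url: https://example.com
      date: 2025-01-15
  and emits a JSON array. The real human. protocol file name
  and schema may differ.
{% endcomment %}
[
{%- for v in site.data.vouches %}
  { "url": "{{ v.url }}", "date": "{{ v.date }}" }{% unless forloop.last %},{% endunless %}
{%- endfor %}
]
```

A `<link>` tag in the site's `<head>` then points at the generated JSON file so the browser extension can find it; check the protocol's documentation for the exact `rel` value to use.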
The protocol has no defense against Sybil attacks. Nothing stops someone from spinning up dozens of domains that all vouch for each other.
And the whole thing comes with uncomfortable questions. Joelchrono admits struggling with edge cases: what about sites that use AI for thumbnails or code snippets but are otherwise human-written? They decided to go by feel, which is honest but also kind of the problem. The system's integrity depends entirely on individual curators making good judgment calls.
Still, that might not be the point. As Joelchrono notes, the strange reality is that humans now have to prove they're human, rather than AI systems being required to disclose themselves. The burden fell to us, not the machines. The project probably won't scale or become a standard. But as a community experiment among indie web folks like Axxuy, Naty, STFN, and Neil, it's a small, tangible way to push back.