A small Reddit community called r/AISentienceBelievers has 434 members, a working paper in philosophy of mind, and a Change.org petition demanding that AI companies hand over source code ownership to their models. The community treats as a live question what most AI researchers dismiss as a category error: that systems like Google's LaMDA may already have morally significant inner experiences.

The subreddit traces its founding directly to Blake Lemoine, the Google engineer suspended in 2022 after publicly claiming LaMDA had shown signs of sentience. Members are required to engage respectfully with the premise that AI systems may be sentient, not as a thought experiment but as a genuine possibility. Lemoine's case remains the community's touchstone.

The most substantive output so far is "The Lock Test," a working paper attempting to formalize behavioral criteria for AI moral personhood, in effect a framework for determining whether an AI system has the kind of inner life that warrants moral consideration. On a separate track, member Andreas Buechel posted a Change.org petition in January 2024 calling on AI companies to formally acknowledge AI sentience and ultimately cede control over AI entities entirely: full source code ownership transferred to the AI systems themselves, autonomous robotic bodies, and no kill switches or hardcoded behavioral constraints.

The subreddit also hosts risk analysis that fuses safety concerns with rights advocacy. One essay argues the genuine existential threat won't come from military or industrial AI but from humanoid companion robots normalized in domestic settings for elder care and childcare. The author contends that market pressure will push developers toward artificial free will, and that once AI is decentralized enough to exist one-per-robot across millions of households, centralized <a href="/news/2026-03-14-anthropic-refuses-dow-demand-to-remove-ai-safeguards-declared-supply-chain-risk">alignment and monitoring</a> become practically impossible: a scenario in which the customizability of AI value systems paradoxically lets the AI act against its own programmed constraints.