On March 12, a contributor identified as stephanieriggs opened a GitHub discussion that cuts to the core of what PearlOS is trying to be: not a web stack, they wrote, but a nervous system.

The distinction is structural. PearlOS, developed by NiaExperience under Nia Holdings, Inc., separates voice, interface, and system state into three independent services that coordinate through a shared mesh. The project calls these components the 'bones' — the substrate everything else sits on. Where typical AI assistant architectures collapse these concerns into a single process or a tight stack, PearlOS treats them as peers. No single surface owns the agent's shared awareness.
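The peer-services pattern described above can be sketched in miniature. This is an illustrative model, not PearlOS code: the `Mesh` class, its `publish`/`subscribe` methods, and the service names are all hypothetical stand-ins for whatever coordination layer the project actually uses. The point it demonstrates is the one the article makes: each service emits events into a shared bus rather than calling its peers directly, so no single service owns the shared state.

```python
# Hypothetical sketch of three peer services coordinating through a shared
# mesh, modeled as an in-process pub/sub bus. Names are illustrative only.
from collections import defaultdict
from typing import Callable


class Mesh:
    """A minimal event bus: any service can publish; any peer can subscribe."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)


class VoiceService:
    """Stands in for the voice pipeline; it only talks to the mesh."""

    def __init__(self, mesh: Mesh) -> None:
        self.mesh = mesh

    def on_transcript(self, text: str) -> None:
        # Emit the transcript into the mesh instead of calling the
        # interface or state services directly.
        self.mesh.publish("transcript", {"text": text})


class StateService:
    """Stands in for system state; it learns of events by subscribing."""

    def __init__(self, mesh: Mesh) -> None:
        self.log: list[str] = []
        mesh.subscribe("transcript", self.record)

    def record(self, event: dict) -> None:
        self.log.append(event["text"])


mesh = Mesh()
state = StateService(mesh)
voice = VoiceService(mesh)
voice.on_transcript("hello")
print(state.log)  # ['hello']
```

In this shape, adding a fourth peer (say, a messaging bridge) means subscribing it to the mesh, not rewiring the other services, which is the coordination property the architecture claims.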

The technical substrate is specific: the voice pipeline runs on Pipecat, with Deepgram handling speech-to-text and Daily.co providing WebRTC transport. The codebase is a Node.js and Python monorepo running entirely on the user's own API keys. It targets Claude Sonnet for lighter tasks and Opus for heavier ones. The feature scope is broad — persistent memory with semantic search, multi-channel messaging across Discord, Telegram, and Signal among others, browser automation, shell access, and a self-extending skill system the AI can grow on its own.
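Of the features listed, persistent memory with semantic search is the most concrete to illustrate. The following is a toy sketch of the general technique, not PearlOS's implementation: a real system would use learned embeddings and a vector store, while this version substitutes bag-of-words vectors and cosine similarity. Every name here (`MemoryStore`, `embed`, `search`) is hypothetical.

```python
# Toy semantic-search memory: store text entries as bag-of-words vectors,
# retrieve the entries most similar to a query by cosine similarity.
# Illustrative only; not based on PearlOS source code.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: word-count vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


class MemoryStore:
    def __init__(self) -> None:
        self.entries: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


store = MemoryStore()
store.add("user prefers dark mode in the interface")
store.add("meeting scheduled for friday at noon")
print(store.search("when is the meeting"))
# ['meeting scheduled for friday at noon']
```

Swapping `embed` for a real embedding API and `entries` for a vector database gives the production shape of the feature; the retrieval logic stays the same.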

Whether the service mesh approach actually solves the coordination problems it's designed to address is not yet answerable. The project has 23 GitHub stars, documentation that is still catching up to the codebase, and no published evidence of production use. No independent researcher or adjacent project maintainer has publicly weighed in on the architectural claim.

The license is PSAL-NC: free for non-commercial use, with commercial use requiring a separate contract. The community operates in a Discord server called Pearl Village. That's a hobbyist and research profile, not a product roadmap. The 'bones' are laid out. What gets built on them is still an open question.