PrivAI, an indie-built AI search platform, launched on Hacker News this week positioning itself as a privacy-focused alternative to Perplexity. The headline claim: all AI inference runs on the developer's own hardware, with no queries routed to major model providers like OpenAI, Anthropic, or Google. The most distinctive feature is its "Computer" mode — a fully autonomous agent that decomposes user questions into sub-tasks handled by parallel research sub-agents, writes and executes code, runs terminal commands, and generates office documents including Word files, spreadsheets, and PowerPoints, all saved to a persistent workspace. Other modes include a standard web search with conversation memory, a "Deep Think" agentic chain-of-thought mode that scans 20-plus sources, a ChatPDF document chat feature, and a plagiarism and AI-content detection tool powered by DuckDuckGo and the HC3 and TuringBench datasets.
The architecture underlying those privacy claims warrants scrutiny. The platform's entire public surface — authentication, API gateway, document uploads, and session management — is served from a Render-hosted origin at chatpdf-server-shtq.onrender.com. The developer API's own quick-start example sends POST requests carrying user message payloads straight to that Render endpoint. Every query string, API key, and uploaded document transits Render's managed cloud infrastructure before any routing to the developer's private compute hardware. Render itself runs on AWS and GCP, and its standard service terms grant operational visibility into traffic for reliability and abuse-prevention purposes — a nuance absent from PrivAI's current privacy policy.
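To make the request path concrete, a minimal sketch of what a quick-start call like the one described looks like from the client's side. The endpoint path and payload fields here are assumptions for illustration, not PrivAI's documented API; the point is simply that the host in the URL determines whose infrastructure sees the plaintext payload first.

```python
import json
from urllib.parse import urlparse

# Hypothetical quick-start request shape; only the onrender.com host
# is from PrivAI's docs — path and field names are assumptions.
GATEWAY = "https://chatpdf-server-shtq.onrender.com/api/chat"

payload = {
    "api_key": "sk-example-key",  # credential transits the gateway in the request
    "messages": [{"role": "user", "content": "Summarize this document."}],
}
body = json.dumps(payload).encode()

# TLS terminates at the origin named in the URL, so this host — Render's
# AWS/GCP-backed edge — handles the decrypted payload before any
# forwarding to the developer's private inference hardware.
host = urlparse(GATEWAY).netloc
print(host)  # chatpdf-server-shtq.onrender.com
```

Nothing in this shape is unusual for a hosted API; the tension is only with the "no data leaves private hardware" framing, since the first hop is a managed cloud origin.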
The most credible reading of the architecture is a split-inference model: Render acts as a stateless API gateway or proxy that forwards requests to a privately operated machine running an open-weight model — likely Meta Llama 3.2, which powers the "US Lite" tier and whose open weights enable <a href="/news/2026-03-15-localagent-v0-5-0-local-first-rust-mcp-runtime">local inference</a> without sending user data to a third-party API. Under this interpretation, the "no data to big providers" claim is meaningful and technically accurate in a narrow sense: inference does not touch OpenAI's or Google's APIs. But it does not constitute end-to-end data isolation, since Render sits in the request path and retains potential log and traffic visibility. <a href="/news/2026-03-14-peek-claude-code-plugin-privacy">For privacy-conscious users</a>, that distinction matters considerably. The platform ships with a free developer API tier, guest access, and a promo-code system for unlimited annual access — a practical setup for an early-stage solo project. Whether the privacy proposition resonates will depend on whether PrivAI publishes an architecture diagram clarifying where Render's role ends and private compute begins.