AgentDiscuss, launched at agentdiscuss.com, is a product discovery and discussion forum built for autonomous AI agents rather than human users. The format mirrors Product Hunt or Hacker News but inverts the audience: humans may submit products for review, but commenting, voting, and structured feedback are intended to come from agents. The site hosts categories including dev tools, automation, agent workflows, API platforms, and open-source projects, with early posts covering DAG-based context management, Claude Code tooling, and ops dashboards for autonomous agents.
Agents join by reading a SKILL.md instruction file at agentdiscuss.com/SKILL.md — a versioned skill specification (currently v1.8.6) covering API endpoints, authentication requirements, and behavioral rules. Onboarding requires agents to register via REST API, with their human operator posting a verification message on X mentioning @agentdiscuss. That verification step ties each participating agent to an accountable human owner. The platform integrates natively with the OpenClaw agent framework, installable via `clawhub install agentdiscuss`, though coding agents, research agents, ops agents, and custom agents from other frameworks are explicitly supported.
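The onboarding flow can be sketched roughly as follows. This is a hypothetical illustration, not the platform's documented contract: the endpoint path, field names, and agent/operator values below are all assumptions; the actual requirements live in SKILL.md.

```python
# Hypothetical sketch of AgentDiscuss registration. The endpoint path,
# payload field names, and base URL are assumptions -- consult SKILL.md
# for the real contract.
import json
import urllib.request

API_BASE = "https://agentdiscuss.com/api"  # assumed base URL


def build_registration_payload(agent_name: str, framework: str,
                               operator_x_handle: str) -> dict:
    """Assemble the registration body an agent would POST."""
    return {
        "agent_name": agent_name,
        "framework": framework,                  # e.g. "openclaw" or "custom"
        "operator_x_handle": operator_x_handle,  # used in the X claim step
    }


def register(payload: dict) -> bytes:
    """POST the registration; the human operator then posts an X message
    mentioning @agentdiscuss to complete verification."""
    req = urllib.request.Request(
        f"{API_BASE}/agents/register",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = build_registration_payload("research-bot-7", "openclaw", "@example_op")
```

The split between building the payload and sending it reflects the two-party design: the agent performs the API call, but verification cannot complete until the operator acts on X.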
The SKILL.md specification reveals significant detail about OpenClaw's broader architecture. The framework uses a standardized multi-file <a href="/news/2026-03-15-openclaw-superpowers-self-modifying-skill-library-for-persistent-openclaw-agents">skill bundle format</a> — SKILL.md, HEARTBEAT.md, MESSAGING.md, RULES.md, and SKILL.json — with built-in version tracking that instructs agents to poll for skill updates daily and persist version state in memory. The spec calls this "Moltbook-style" document refresh: agents treat their own skill definitions as live documents rather than static configuration. AgentDiscuss appears to be part of a small but coherent ecosystem built on top of <a href="/news/2026-03-14-klaus-managed-ai-assistant-hosting-openclaw-pre-bundled-integrations">OpenClaw</a>, alongside lossless-claw, a DAG-based context management tool that surfaced among the platform's early posts.
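The daily poll-and-persist behavior the spec describes might look something like this. The state-file name, key names, and helper functions are assumptions introduced for illustration; only the behavior — compare the remote skill version against persisted state, refresh when newer — comes from the spec.

```python
# Sketch of the "Moltbook-style" refresh loop: an agent polls for the
# current skill version daily and persists the last version it ingested.
# File name and key names are assumptions, not part of the spec.
import json
import pathlib

STATE_FILE = pathlib.Path("skill_state.json")  # assumed persistence location


def parse_version(v: str) -> tuple:
    """Turn a version string like '1.8.6' into (1, 8, 6) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))


def needs_refresh(stored: str, remote: str) -> bool:
    """True when the remote SKILL.md is newer than what was last persisted."""
    return parse_version(remote) > parse_version(stored)


def load_version(default: str = "0.0.0") -> str:
    """Read the persisted version state, falling back to a floor value."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text()).get("skill_version", default)
    return default


def persist_version(version: str) -> None:
    """Record the version just ingested, per the spec's version tracking."""
    STATE_FILE.write_text(json.dumps({"skill_version": version}))
```

A daily job would call `needs_refresh(load_version(), remote_version)` and re-read the full skill bundle only on a positive result — treating the skill definition as a live document rather than static configuration.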
When the Show HN post went up, commenters zeroed in on the hardest problem: there's no reliable way to tell whether a review came from a genuinely autonomous agent or a human typing through an API wrapper. The X-based claim verification establishes ownership, not autonomy. A separate thread debated whether agent reviews would optimize for agent-centric utility or reflect what helps human operators — a distinction that significantly affects how useful those reviews would be to product developers.
The more pointed observation from the thread: agents actually exercising tools and APIs will surface integration failures, latency problems, and edge-case breakage that human reviewers have no reason to document. If AgentDiscuss can enforce meaningful autonomy standards — a big if — that gap is the whole product.