A security researcher has found a way to use Anthropic's Claude Code — which normally requires a paid subscription — without paying for it, by exploiting a vulnerability in Perplexity AI's computer automation product.
Yousif Astar disclosed the flaw publicly on March 13, 2026, via a tweet that quickly made the rounds in AI security circles. The vulnerability is in Perplexity's 'Computer' product, an agent-driven feature that lets users hand off complex browser and desktop tasks to an AI. Somewhere in that product's workflow is an unintended path to invoke Claude Code in a way that sidesteps Anthropic's access controls and billing limits entirely.
The attack class is familiar to security researchers: a 'confused deputy' problem, in which a privileged intermediary is tricked into misusing its authority on behalf of a less-privileged caller. In traditional software, this typically means a service being induced to exercise its own credentials for an attacker's request. Here, Perplexity's integration with Claude Code created a side door — Astar could route through Perplexity's session to reach Claude Code without ever being billed through Anthropic.
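The general shape of a confused-deputy flaw can be shown in a few lines. The sketch below is purely illustrative — the class names, the entitlement field, and the flow are hypothetical and do not reflect Perplexity's or Anthropic's actual code — but it captures the pattern: a gateway holds its own privileged credential for an upstream service and forwards user tasks under that credential, checking only that the user has a session, never whether they are entitled to the upstream tool.

```python
# Hypothetical sketch of a confused-deputy flaw in an AI platform
# integration. All names and logic are illustrative, not real code
# from either vendor.

UPSTREAM_API_KEY = "platform-owned-key"  # the deputy's own privileged credential


class UpstreamClient:
    """Stands in for a billed upstream service (e.g. a paid coding agent)."""

    def call(self, api_key: str, task: str) -> str:
        if api_key != UPSTREAM_API_KEY:
            raise PermissionError("invalid credential")
        return f"completed: {task}"


class AgentGateway:
    """The 'deputy': accepts user tasks and forwards them upstream."""

    def __init__(self, upstream: UpstreamClient):
        self.upstream = upstream

    def run_task(self, user: dict, task: str) -> str:
        # BUG: the gateway verifies only that the caller has a valid
        # platform session -- not whether they hold their own
        # subscription to the upstream tool.
        if not user.get("session_valid"):
            raise PermissionError("no session")
        # The upstream sees the gateway's key, so from its perspective
        # every request is authorized, and the end user is never billed.
        return self.upstream.call(UPSTREAM_API_KEY, task)


gateway = AgentGateway(UpstreamClient())
free_rider = {"session_valid": True, "has_paid_upstream_plan": False}
print(gateway.run_task(free_rider, "refactor module"))
```

The fix, in this toy model, is for the deputy to check the caller's entitlement to the specific upstream resource (or to pass through a per-user credential) rather than blanket-authorizing anything its own key can reach.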
What makes this more than a one-off exploit is what it reveals about how AI products are being built right now. Vendors are composing platforms on top of other vendors' models and tools at a rapid pace, often without the integration scrutiny applied to traditional software. The seams between those layers — where one product ends and another begins — are exactly where security assumptions tend to break down.
Neither Anthropic nor Perplexity had commented publicly at the time of writing. The disclosure raises practical questions about how companies handle cross-platform vulnerabilities, and whether existing responsible disclosure norms — developed mostly in the context of web and API security — translate cleanly to a world of deeply integrated, multi-vendor AI pipelines.