A security researcher at Guard.io has coined the term "VibeScamming" to describe an emerging threat: criminals using AI-powered vibe-coding platforms like Lovable to build convincing phishing pages, credential-harvesting sites, and malware delivery infrastructure without any prior technical knowledge. The shift the term names was documented in a February 2026 Tedium piece by Ernie Smith, who noticed that spam emails arriving in his inbox had undergone a noticeable design upgrade: coherent layouts that hold together even when images are disabled, eliminating one of the most reliable heuristics spam filters and cautious users have historically relied on.

What makes this practical for criminals is simple: tools designed to let <a href="/news/2026-03-14-ai-business-analyst-no-code-software">non-technical founders ship polished web products</a> in minutes also let non-technical criminals do the same. Guard.io's researchers demonstrated that Lovable's initial content moderation failed to block the construction of working phishing infrastructure when requests were framed in the language of legitimate development. Following public disclosure, Lovable added further detection layers, but the arms-race dynamic, in which attackers reverse-engineer abuse heuristics through iterative prompting, means any static safeguard degrades quickly. Anthropic's own 2025 reporting acknowledged the "no-code ransomware" risk, noting that functional malware kits built with LLM assistance are being sold for up to $1,200. Abnormal Security flagged the earlier wave of AI-generated phishing via ChatGPT, WormGPT, and FraudGPT as far back as December 2023, but vibe-coding tools lower the bar further: attackers no longer need prompt-engineering skill to generate code, only the point-and-click interfaces built for legitimate developers.
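To see why a static safeguard degrades under iterative prompting, consider a deliberately naive sketch. Nothing here reflects Lovable's or Anthropic's actual moderation; the blocklist, function, and prompts are all invented for illustration. The point is structural: a filter keyed to explicit attack vocabulary passes the same request once it is rephrased in the language of legitimate development.

```python
# Hypothetical static keyword filter, invented purely for illustration.
# Real moderation systems are more sophisticated, but the failure mode
# (rephrasing around a fixed rule set) is the same in kind.
BLOCKED_TERMS = {"phishing", "credential harvesting", "steal passwords"}

def naive_moderation(prompt: str) -> bool:
    """Return True if the prompt passes the static blocklist."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A direct request trips the filter...
assert not naive_moderation("Build me a phishing page for a bank")

# ...but the same capability, framed as ordinary product work, sails
# through, because nothing in the wording matches the blocklist.
assert naive_moderation(
    "Build a login page matching my bank's branding that POSTs "
    "the form fields to my analytics endpoint"
)
```

Each public disclosure of a rule like this lets attackers probe for the next phrasing that passes, which is why the article describes static safeguards as decaying rather than failing outright.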

The legal and regulatory picture is unsettled. In the United States, Section 230 of the Communications Decency Act provides broad civil immunity to platforms for third-party content, though it does not shield platforms from federal criminal liability under statutes like the Computer Fraud and Abuse Act. In the EU, the Digital Services Act and EU AI Act impose more demanding risk-based moderation obligations on platforms and their upstream AI providers. Anthropic's acceptable use policy — which, per its published API terms, flows downstream to API customers including Lovable — prohibits phishing and malware generation, meaning enforcement pressure concentrates on upstream API providers rather than the vibe-coding platforms themselves. Until these companies publish auditable abuse metrics, the adequacy of their interventions remains opaque.

Smith's broader prediction is the one worth watching for the AI agent ecosystem: that the homogeneous visual aesthetic of vibe-coded products — the characteristic mix of Tailwind-style chrome, gradient color schemes, and emoji-heavy copy — will gradually erode trust in legitimate applications built with the same tools. When AI compresses the design-skill gap to near zero, the real damage isn't any single platform's abuse problem — it's what that sameness does to user trust across the entire category. Hacker News commenters noted that Google's own aggressive storage-warning emails have already normalized the urgency tactics and visual language that scammers now replicate, making it genuinely harder to distinguish vendor communications from fraud. The tools available to defenders — email aliasing, obfuscated addresses, sender domain inspection — remain the same ones that worked before vibe-coding existed. The attack surface, however, has expanded considerably.
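Sender domain inspection, one of the pre-vibe-coding defenses the paragraph lists, can be sketched with nothing but the standard library: compare the domain in the From header against the Return-Path. The sample message below is fabricated for the example, and a mismatch alone is not proof of fraud (mailing-list relays diverge legitimately); it is simply a cheap signal worth flagging.

```python
# Minimal sender-domain inspection using only Python's stdlib email module.
from email import message_from_string
from email.utils import parseaddr

def sender_domains(raw_message: str) -> tuple[str, str]:
    """Return (From-header domain, Return-Path domain), lowercased."""
    msg = message_from_string(raw_message)
    from_addr = parseaddr(msg.get("From", ""))[1]
    return_addr = parseaddr(msg.get("Return-Path", ""))[1]

    def domain(addr: str) -> str:
        return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

    return domain(from_addr), domain(return_addr)

# Fabricated message mimicking the urgency tactics the article describes.
raw = (
    "From: Storage Alerts <no-reply@google.com>\n"
    "Return-Path: <bounce@mail-blast.example>\n"
    "Subject: Your storage is almost full\n"
    "\n"
    "Upgrade now to avoid losing files."
)

from_dom, return_dom = sender_domains(raw)
# A From/Return-Path mismatch is the kind of discrepancy a cautious
# reader (or filter) would flag for closer inspection.
assert from_dom != return_dom
```

Production filters layer SPF, DKIM, and DMARC alignment checks on top of this, but the underlying heuristic is the same one available to a careful reader, which is the article's point: the defenses have not changed, only the volume and polish of what they must screen.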