Folk artist Murphy Campbell discovered in January 2026 that someone had uploaded AI-generated covers of her songs to Spotify under her own name. The tracks had been created by scraping her YouTube performances and running them through voice cloning tools; tests with AI detection software indicated the audio was synthetic.

Then came the real twist. Campbell started receiving copyright claims against her original music. The same automated systems meant to protect artists were being weaponized to strip her of revenue from songs she actually wrote.

This mess traces back to how music gets onto streaming platforms. Spotify, Apple Music, and their competitors don't let artists upload directly. They rely on distributors like DistroKid, TuneCore, and CD Baby to handle verification. The problem is that verification is weak. Bad actors can open new accounts, claim artist names, and start uploading without proving they own those names or the rights to the content. Once that content enters the system with a trusted stamp from a distributor partner, the streaming platform's copyright enforcement treats it as legitimate.
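To make the trust gap concrete, here is a minimal sketch of that ingestion flow in Python. Every name in it is hypothetical (Upload, Distributor, StreamingPlatform are invented stand-ins, not real APIs); what the sketch preserves is the structure: the distributor's check never ties an uploader account to the artist name it claims, and the platform inherits that unverified claim as if it were verified.

```python
# Hypothetical sketch of the ingestion flow described above. Class and
# field names are invented for illustration; real distributor and platform
# systems are far more involved. The sketch keeps only the trust structure:
# the distributor's check is shallow, and the platform treats anything the
# distributor passes along as legitimate.

from dataclasses import dataclass


@dataclass
class Upload:
    artist_name: str       # claimed by the uploader, never proven
    track_title: str
    uploader_account: str  # a brand-new account is enough


class Distributor:
    """Stand-in for a DistroKid/TuneCore/CD Baby-style distributor."""

    def accept(self, upload: Upload) -> Upload:
        # The weak link: the only check is that an account exists.
        # Nothing ties uploader_account to the claimed artist_name.
        if not upload.uploader_account:
            raise ValueError("account required")
        return upload  # passes through with the distributor's implicit stamp


class StreamingPlatform:
    """Stand-in for a streaming service that ingests via distributors."""

    def __init__(self) -> None:
        self.catalog: list[Upload] = []

    def ingest(self, upload: Upload, via: Distributor) -> None:
        # The platform trusts the distributor's stamp wholesale and
        # never re-verifies the artist-name claim itself.
        self.catalog.append(via.accept(upload))


# A fraudster's cloned track lands under the real artist's name.
fraud = Upload(
    artist_name="Murphy Campbell",
    track_title="AI-generated cover",
    uploader_account="brand_new_account_0042",
)
platform = StreamingPlatform()
platform.ingest(fraud, via=Distributor())
print(platform.catalog[0].artist_name)  # -> "Murphy Campbell"
```

Any real fix would have to live at the accept step, requiring proof of control over the claimed artist name before the distributor's stamp is issued, rather than sorting out ownership only after a dispute.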

A caveat: this story comes from a single source, and some observers have flagged it as potential engagement bait. Campbell's account hasn't been independently verified by major outlets.

Still, the vulnerability it describes is real. Major label artists have legal teams and direct relationships with platforms. Independent artists don't. When a fraudulent copyright claim lands against the actual artist, the burden of proof falls on the victim, who has to document ownership of work they created. For now, the system remains exploitable, and artists without institutional backing are paying the price.