A writer named Julia sued Grammarly this week, alleging the company's AI product generates content in her voice, tied to her name, without her knowledge or permission. The New York Times reported the case in an opinion piece on March 13, 2026. Julia's last name and the case docket number did not appear in that report; Grammarly has not commented publicly.
The lawsuit targets something distinct from the training-data disputes that have dominated AI litigation. Copyright suits brought by authors, publishers, and musicians argue that scraping and learning from copyrighted work requires a license. Julia's complaint, as described in the Times piece, concerns what Grammarly's product does after training: it produces text attributed to a real, named person. That is an impersonation claim, not an infringement claim.
The core legal theory is right of publicity: an individual's right to control commercial use of their name and identity. Defamation or false-light claims could follow if the AI-generated text misrepresents her actual views. Courts have applied right-of-publicity doctrine mostly in advertising and endorsement contexts; AI-generated personas tied to living individuals remain untested territory.
The case drew criticism on Hacker News, where commenters questioned how the feature cleared internal review. No named commenters or quotable posts were available for attribution at publication time.
Several states, California and New York most prominently, have introduced AI-specific right-of-publicity bills, none of which has passed. A ruling in Julia's favor could strengthen the case for federal legislation and push Grammarly and competing tools to pull or redesign any feature that ties generative output directly to a named individual. The first hearing date has not been reported.