The European Union advanced a proposal this month to criminalize AI-generated child sexual abuse material, closing a legal gap that has existed since synthetic image tools became consumer-grade. Reuters reported the move on March 13; the measure targets generative AI output regardless of whether real children were involved in its creation.

Child protection agencies have flagged this gap for years. EU law on CSAM was written before diffusion models existed, leaving prosecutors uncertain whether synthetic imagery fell within existing prohibitions. The new proposal removes that ambiguity by covering the output itself, irrespective of how it was produced.

Not everyone agrees a ban is the right instrument. Some researchers argue that synthetic CSAM, generated without real victims, could reduce demand for material produced through actual abuse, cutting the financial incentives for exploitation. The evidence on whether access to such content increases or decreases real-world offending remains contested; studies have reached conflicting conclusions depending on jurisdiction and methodology. The opposing view, which underpins calls for still-broader restrictions, holds that no reliable technical method exists to distinguish AI-generated imagery from photographs of real children. If investigators cannot prove an image is synthetic, enforcement that hinges on an image's provenance becomes largely unworkable, leaving a blanket prohibition as the only enforceable option. The same identification problem extends to non-consensual deepfakes.

EU officials quoted in the Reuters report described the proposal as a starting point rather than a complete framework. Legislators working on the file have acknowledged that updates will be required as image generation capabilities advance. The European Commission is expected to follow with additional measures targeting synthetic content before the end of 2026.