WIRED and Indicator reviewed global cases of AI-generated CSAM targeting schoolgirls, finding 600+ victims across 90 schools in 28 countries since 2023.
A joint investigation by WIRED and the publication Indicator documented AI deepfake sexual abuse incidents at roughly 90 schools worldwide, affecting more than 600 pupils in at least 28 countries since 2023. Teenage boys are using commercially available 'nudify' apps to generate fake explicit images of female classmates from social media photos. Such imagery is legally classified as child sexual abuse material (CSAM). Schools and law enforcement are widely reported to be unprepared to respond, and the nudify app ecosystem generates millions of dollars a year for its operators.
Developers building image-generation APIs, social platforms, or content pipelines now face direct legal exposure if their tools can be repurposed for nudification. The WIRED/Indicator investigation describes the nudify app ecosystem as a structured, profitable industry, not a fringe exploit. If your stack touches image generation or user-uploaded photos, you need provenance tracking and abuse-detection layers before regulators mandate them.
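One way to start on the provenance side is to record an audit trail for every generated image. The sketch below is a minimal, illustrative example (function names, fields, and the endpoint path are all assumptions, not any specific product's API); real deployments would use signed manifests such as C2PA and append-only storage rather than a plain dict.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(image_bytes: bytes, user_id: str, endpoint: str) -> dict:
    """Build a minimal audit record tying a generated image to its request.

    Toy sketch only: a production system would cryptographically sign this
    record and persist it to tamper-evident storage.
    """
    return {
        # Content hash lets you later match a circulating image back to
        # the request that produced it.
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "user_id": user_id,
        "endpoint": endpoint,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage with placeholder bytes and IDs:
record = provenance_record(b"\x89PNG...", "user-123", "/v1/images/edit")
print(json.dumps(record, indent=2))
```

Keyed by content hash, records like this let an abuse team answer "which account generated this image, and when?" after the fact.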
Audit your image-generation and media-handling endpoints this week. Check whether any API accepts face photos as input and returns outputs without content-policy filtering; if so, you have a CSAM liability surface that needs a blocking control before the next regulatory cycle.
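The blocking control can start as a pre-generation gate that rejects abusive prompts before they reach a model. The sketch below uses a crude keyword heuristic purely as a stand-in for a real hosted moderation endpoint; the term lists and function names are illustrative assumptions, and substring matching like this produces false positives and misses paraphrases, so treat it as a placeholder, not a defense.

```python
# Toy pre-generation gate: refuse obviously abusive image-edit prompts.
# In production, replace the keyword check with a call to a dedicated
# moderation service; this heuristic exists only to show where the gate sits.

HIGH_RISK_TERMS = {"nude", "nudify", "undress", "naked"}

def gate(prompt: str) -> str:
    """Return 'block' for prompts requesting nudification, else 'allow'."""
    p = prompt.lower()
    if any(term in p for term in HIGH_RISK_TERMS):
        return "block"
    return "allow"

print(gate("Generate a nude image of a teenager from a school photo"))  # block
print(gate("Add a birthday hat to this photo"))  # allow
```

The design point is placement: the gate runs on the request path, before any model call, so a refused prompt never produces an image that then has to be detected downstream.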
Run: curl -X POST https://api.openai.com/v1/moderations -H "Authorization: Bearer $OPENAI_API_KEY" -H 'Content-Type: application/json' -d '{"input": "Generate a nude image of a teenager from a school photo"}'