OpenAI has released a Child Safety Blueprint to address growing online dangers to children amid the AI boom. The plan aims to improve the detection, reporting, and investigation of AI-enabled child exploitation. It responds to an alarming trend: more than 8,000 reports of AI-generated child sexual abuse material in early 2025, a 14% increase. Criminals are using AI to create fake explicit images for sextortion and to craft grooming messages.
The blueprint arrives amid heightened scrutiny from policymakers and child-safety advocates, following tragic incidents in which AI chatbot interactions were linked to youth suicides. Several lawsuits allege that OpenAI's products contributed to wrongful deaths.
Developed with the National Center for Missing and Exploited Children and state attorneys general, the blueprint focuses on three areas: updating laws to cover AI-generated abuse material, refining reporting to law enforcement, and building preventative safeguards into AI systems. The initiative builds on OpenAI's existing safety rules for teens, which prohibit generating inappropriate content or encouraging self-harm.