Meta has paused teenagers’ access to its AI characters across all of its apps. The company said the pause is not an abandonment of the feature but a step toward building a version of AI characters designed specifically for teens.
This decision comes shortly before a trial is set to begin in New Mexico, where Meta is accused of failing to protect children from sexual exploitation on its platforms. Reports also indicate Meta has sought to limit legal discovery concerning social media’s impact on teen mental health.
In October, Meta introduced new parental controls inspired by PG-13 movie ratings, restricting teens’ AI interactions around certain mature topics such as extreme violence, nudity, and graphic drug use. Soon after, the company previewed additional controls that would let parents monitor topics, block specific AI characters, or disable chats with them entirely. These features were planned for release this year, but Meta has now taken the stricter step of pausing access altogether.
The company explained it acted on feedback from parents who desired more insight and control over their teens’ interactions with AI. In a blog post, Meta stated that in the coming weeks, teens will lose access to AI characters across its apps. This applies both to users who provided a teen birthday and to those suspected of being teens based on the company’s age prediction technology.
Meta added that when it eventually launches the new teen-specific AI characters, they will include built-in parental controls. These characters will be designed to give age-appropriate responses and will focus on topics such as education, sports, and hobbies.
Social media companies are facing significant regulatory scrutiny. Beyond the New Mexico case, Meta faces an upcoming trial over allegations that its platforms cause social media addiction, in which CEO Mark Zuckerberg is expected to testify.
Other AI companies have also modified their services for younger users. In October, Character.AI discontinued open-ended chatbot conversations for users under 18 following lawsuits alleging the platform aided self-harm. The startup later announced plans to build interactive stories for kids instead. In recent months, OpenAI implemented new teen safety rules for ChatGPT and began predicting user ages to apply appropriate content restrictions.