Instagram to show PG-13 content by default to teens, adds more parental controls

Instagram is introducing new restrictions to protect its underage users from harmful content. Users under the age of 18 will now, by default, only see content that aligns with PG-13 movie ratings. This means their feeds will avoid themes of extreme violence, sexual nudity, and graphic drug use.

Teen users cannot change this setting without the explicit approval of a parent or guardian. The platform is also launching a stricter content filter called Limited Content. When this filter is enabled on a teen's account, the teen will be unable to see or post comments on affected posts.

Starting next year, Instagram will further restrict the kinds of conversations teens can have with AI bots when the Limited Content filter is enabled. The new PG-13 content standards already apply to AI conversations.

This development occurs as other chatbot companies face legal action for allegedly causing harm to users. In response, OpenAI recently rolled out new restrictions for its users under 18, and Character.AI added new limits and parental controls earlier this year.

Instagram has been consistently building tools for teen safety across accounts, direct messages, search, and content. The service will now prevent teenagers from following accounts that share age-inappropriate content. If a teen already follows such an account, they will no longer be able to see content from it or interact with it, and the account will be unable to interact with them. Instagram is also removing these accounts from its recommendation systems, making them harder to find.

The company is also blocking teenagers from viewing inappropriate content sent to them in direct messages. Meta, Instagram’s parent company, already restricts teen accounts from discovering content related to eating disorders and self-harm. It is now also blocking search terms like “alcohol” and “gore,” and is working to ensure teens cannot surface this content through misspellings of those terms.

Instagram is testing a new method for parents using supervision tools to flag content they believe should not be recommended to teens. These flagged posts will be sent to a review team for evaluation.

These changes are being rolled out starting today in the United States, United Kingdom, Australia, and Canada. A global rollout is planned for next year.