Instagram adds new protections for accounts that primarily feature children

Meta has announced new safety measures for adult-run Instagram accounts that primarily feature children. These accounts will be automatically placed under the app’s strictest message settings to prevent unwanted contact. Additionally, the platform’s “Hidden Words” feature will be enabled to filter out offensive comments. Alongside these changes, Meta is also introducing new safety features specifically for teen accounts.

The stricter messaging controls will apply to accounts managed by adults who regularly share photos and videos of their children, as well as accounts run by parents or talent managers on a child’s behalf. While most of these accounts are used appropriately, Meta acknowledges the risk of abuse: the company noted that some people leave sexualized comments or request inappropriate images, both of which violate Instagram’s rules. The new measures are intended to prevent that kind of abuse.

Meta will work to block potentially suspicious adults—such as those previously blocked by teens—from discovering or interacting with accounts featuring children. The company will limit recommendations between these suspicious accounts and the child-focused accounts, and make it more difficult for them to find each other through Instagram Search.

This announcement is part of a broader effort by Meta and Instagram over the past year to address mental health concerns related to social media. These concerns have been highlighted by the U.S. Surgeon General as well as state governments, some of which have passed laws requiring parental consent for teens to access social media platforms.

The new policies will have a significant impact on family vloggers and parents managing accounts for “kidfluencers.” These groups have faced criticism over the risks of sharing children’s lives online. A New York Times investigation last year found that some parents are aware of, and in some cases participate in, the exploitation of their children, for example by selling photos of them or child-themed merchandise. In its analysis of 5,000 parent-run accounts, the investigation found 32 million connections to male followers.

Instagram will notify affected accounts with a message at the top of the feed informing them of the updated safety settings. This notification will also encourage users to review their privacy options. Meta reports that it has removed nearly 135,000 Instagram accounts that were sexualizing content featuring children, along with 500,000 associated accounts across Instagram and Facebook.

In addition to these changes, Meta is launching new safety features for teen accounts’ direct messages. These updates include safety tips reminding teens to carefully check profiles and think critically about what they share. The month and year the other person’s account joined Instagram will now be displayed at the top of new conversations, giving teens more context about who they are messaging. A new combined block-and-report option has also been added to streamline reporting.

These features aim to help teens identify potential scammers and stay safer on the platform. Meta highlighted that, in response to previous safety notices, teens blocked accounts one million times and reported another one million accounts in a single month.

Finally, Meta shared an update on its nudity protection filter, which automatically blurs explicit images in Instagram direct messages. The company noted that 99% of users, including teens, have chosen to keep the feature turned on. Last month, more than 40% of the images that were blurred stayed that way, meaning recipients chose not to reveal them.