Instagram head pressed on lengthy delay to launch teen safety features, like a nudity filter, court filing reveals

Lawyers for the plaintiffs in a lawsuit examining whether social media apps like Instagram are addictive and harmful wanted to know why it took Meta so long to roll out basic safety tools, such as a nudity filter for private messages sent to teens. In April 2024, Meta introduced a feature that automatically blurs explicit images in Instagram direct messages; the company had reportedly understood this to be an issue nearly six years earlier.

In a newly unsealed deposition from the federal lawsuit, Instagram head Adam Mosseri was questioned about an August 2018 email chain with Meta’s Vice President and Chief Information Security Officer, Guy Rosen. In that chain, Mosseri mentioned that “horrible” things could happen via Instagram private messages. When the plaintiffs’ lawyer suggested those horrible things could include unsolicited explicit images, Mosseri agreed.

Mosseri pushed back, however, on suggestions that the company should have informed parents that its messaging system wasn’t monitored beyond the removal of child sexual abuse material. He stated that problematic content can be sent through any messaging app and that the company tried to balance user privacy with safety.

The testimony revealed new statistics about harmful activity on Instagram. Among survey respondents aged 13 to 15, 19.2% reported seeing nudity or sexual images on Instagram that they did not want to see, and 8.4% said they had seen someone harm themselves, or threaten to do so, on Instagram within the past seven days of using the app.

While the nudity filter is one of several updates added to Instagram in recent years to protect teens, the plaintiffs’ lawyers were more interested in the delay in acting than in whether the app is safer now.

Mosseri was also questioned on other topics, including a 2017 email from a Facebook intern who expressed a desire to find “addicted” users and figure out ways to help them.

The 2018 email chain served as evidence that Meta was aware of the risks to minors, yet the company did not release a product addressing sexual images sent to teens until 2024. Those images include ones sent by adults engaged in grooming, the process by which an adult builds trust with a minor over time in order to manipulate or sexually exploit them.

When reached for comment, Meta spokesperson Liza Crenshaw pointed to other ways the company has worked to keep teens safe over the years. She noted that for over a decade, Meta has listened to parents, worked with experts and law enforcement, and conducted research to understand key issues. She stated these insights led to changes like introducing Teen Accounts with built-in protections and providing parents with management tools, and that the company is always working to do better.

Mosseri’s deposition is part of a group of lawsuits seeking to hold major technology companies accountable for harming teens. This particular case, in the U.S. District Court for the Northern District of California, involves plaintiffs alleging that social media platforms are defectively designed to maximize screen time, encouraging addictive behavior in teens. The defendants include Meta, Snap, TikTok, and YouTube.

Similar lawsuits are underway in the Los Angeles County Superior Court and in New Mexico. Lawyers across these cases aim to prove that the technology companies prioritized user growth and engagement over protecting their youngest users from harm.

The timing of these trials coincides with a growing number of laws, both in several U.S. states and abroad, that restrict social media use by teens.