Meta suppressed children’s safety research, four whistleblowers claim

Two current and two former Meta employees have disclosed documents to Congress alleging that the company may have suppressed research on children’s safety, according to a report by The Washington Post.

According to their claims, Meta changed its policies around researching sensitive topics such as politics, children, gender, race, and harassment. This policy shift occurred just six weeks after whistleblower Frances Haugen leaked internal documents. Those documents revealed Meta’s own research found that Instagram can damage teen girls’ mental health. These 2021 revelations initiated years of Congressional hearings on child internet safety, an issue that remains a top concern for governments worldwide.

The report states that Meta proposed two ways for researchers to limit the legal risk of conducting sensitive studies. One suggestion was to involve lawyers in the research process so that communications would be protected under attorney-client privilege. The other was to describe findings more vaguely, avoiding direct terms like “not compliant” or “illegal.”

Jason Sattizahn, a former Meta researcher specializing in virtual reality, told The Washington Post that his boss instructed him to delete recordings of an interview in which a teen claimed his ten-year-old brother had been sexually propositioned on Meta’s VR platform, Horizon Worlds.

A Meta spokesperson stated that global privacy regulations require the deletion of information collected from minors under 13 without verifiable parental consent.

The whistleblowers claim the documents they submitted to Congress show a pattern of employees being discouraged from discussing and researching concerns about how children under 13 were using Meta’s social virtual reality apps.

Meta has responded to these claims, stating that a few examples are being stitched together to fit a false narrative. The company said that since the start of 2022, it has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being.

In a separate lawsuit filed in February, Kelly Stonelake, who worked at Meta for fifteen years, raised similar concerns. She led “go-to-market” strategies for Horizon Worlds and said she felt the app lacked adequate methods to keep out users under 13. She also flagged persistent problems with racism on the platform. The lawsuit alleges that the leadership team was aware that in one test, users with Black avatars were called racial slurs within an average of 34 seconds of entering the platform. Stonelake has also separately sued Meta for alleged sexual harassment and gender discrimination.

While these whistleblower allegations focus on Meta’s VR products, the company also faces criticism over how other products, such as its AI chatbots, affect minors. A recent Reuters report indicated that Meta’s AI rules previously allowed chatbots to have romantic or sensual conversations with children.