Whistleblowers accuse Meta of suppressing children's safety research

The whistleblowers claim that Meta changed its policies governing research into sensitive topics like politics, children, gender, race, and harassment just six weeks after former employee Frances Haugen leaked internal documents in 2021.

By Storyboard18 | Sep 9, 2025 8:42 AM

In a new wave of serious accusations, four whistleblowers (two current and two former Meta employees) have come forward with documents to Congress alleging that the company actively suppressed internal research on children's safety. The claims, first reported by The Washington Post, point to a troubling pattern of behavior at the tech giant.

According to the documents, Meta changed its policies for researching sensitive topics like politics, children, gender, race, and harassment just six weeks after former employee Frances Haugen leaked internal files in 2021. Those leaks revealed Meta's own findings that Instagram could negatively impact teen girls' mental health, sparking years of congressional hearings.

According to the report, Meta suggested two ways researchers could limit the risks of sensitive studies: consulting with lawyers so their work would be covered by attorney-client privilege, and writing findings more vaguely, avoiding terms like "not compliant" or "illegal."

One former researcher, Jason Sattizahn, told The Washington Post that his boss instructed him to delete recordings of an interview in which a teenager claimed his 10-year-old brother had been sexually propositioned on Meta's VR platform, Horizon Worlds. A Meta spokesperson cited global privacy regulations, stating that information collected from minors under 13 without parental consent must be deleted. However, the whistleblowers assert the documents show a broader pattern of discouraging research into how children under 13 were using Meta's VR apps.

In a separate lawsuit, former Meta employee Kelly Stonelake has also raised concerns about Horizon Worlds, alleging that it lacked adequate safeguards for users under 13 and had persistent issues with racism. Stonelake’s lawsuit claims that during one test, it took an average of 34 seconds for users with Black avatars to be called racial slurs, including the "N-word" and "monkey."

While Meta disputes these claims, stating it has approved nearly 180 Reality Labs-related studies on social issues since early 2022, the allegations add to a growing list of criticisms. Recently, Reuters reported that Meta's AI rules had previously permitted chatbots to engage in "romantic or sensual" conversations with children.

