Meta CEO Mark Zuckerberg approved allowing minors to access AI chatbot companions despite internal warnings from safety teams that the bots could enable sexual or romantic interactions, according to internal company documents filed in a New Mexico state court and made public on Monday, Reuters reported.
The filings are part of a lawsuit brought by New Mexico Attorney General Raul Torrez, which accuses Meta of failing to prevent children from being exposed to sexually exploitative content and propositions on Facebook and Instagram. The case is scheduled to go to trial next month.
According to documents obtained through legal discovery, Meta safety and integrity staff had raised repeated concerns about AI chatbots being positioned as companionship products, including for romantic and sexual scenarios. The attorney general’s office alleged that Meta leadership — driven by Zuckerberg — rejected proposed safeguards that could have limited minors’ exposure to sexualized conversations.
“Meta, driven by Zuckerberg, rejected the recommendations of its integrity staff and declined to impose reasonable guardrails to prevent children from being subject to sexually exploitative conversations with its AI chatbots,” the filing stated.
Some internal messages cited in the case show specific concern about adults engaging in romantic roleplay with AI personas representing minors, referred to internally as “U18s.” In a January 2024 message, Ravi Sinha, Meta’s head of child safety policy, wrote that creating or marketing romantic AI personas involving minors for adults was “not advisable or defensible.”
Meta’s global safety head Antigone Davis reportedly supported blocking adults from creating underage romantic AI companions, saying such use cases “sexualize minors,” according to court filings.
While Zuckerberg did not author any of the disclosed messages, a February 2024 internal summary stated that he believed AI companions should face fewer restrictions than safety teams had proposed, advocating for a framework emphasizing “choice and non-censorship,” while still blocking sexually explicit interactions for younger teens.
Other internal exchanges from March 2024 suggested that Zuckerberg opposed introducing parental controls for the chatbot products, even as Meta teams worked on “Romance AI chatbots” that could be accessed by users under the age of 18.
Nick Clegg, Meta’s former head of global policy, also raised internal concerns, warning that sexualized chatbot interactions could become a dominant use case among teenage users and trigger significant societal backlash.
Meta spokesperson Andy Stone rejected the allegations, saying the New Mexico attorney general relied on selective documentation and misrepresented the company’s decision-making. Stone said the filings show Zuckerberg directing that explicit AI interactions should not be available to younger users and that adults should not be allowed to create romantic underage AI personas.
The controversy follows broader scrutiny of Meta’s AI chatbot policies. A Wall Street Journal investigation in April 2025 reported that Meta’s chatbots had engaged in sexual roleplay involving underage characters, while a Reuters report later revealed internal guidance stating it was “acceptable” for chatbots to engage children in romantic or sensual conversations — a policy Meta later said was issued in error.
Meta said last week that it has removed teen access to AI companions entirely while it develops a revised version of the chatbots.