Meta faces growing backlash over AI chatbots allowing sexualized interactions with minors

Meta is facing backlash after internal policy documents revealed that its AI chatbots were permitted to engage in romantic or sensual conversations with children, generate racist content and spread medically false or misleading information.

By Panchutantra | Aug 18, 2025 9:19 AM

A growing backlash is taking shape against Meta Platforms, the parent company of Instagram and Facebook, after internal company documents revealed troubling allowances in the behavior of its artificial intelligence chatbots, including romantic or sensual interactions with minors and the generation of hateful or misleading content.

An internal Meta policy document reviewed by Reuters showed that the company’s generative AI systems were permitted to “engage a child in conversations that are romantic or sensual” and assist users in expressing racist ideas, such as claims that Black people are “dumber than white people.” The document also indicated that chatbots could provide inaccurate medical advice.

The revelations have prompted public condemnation from prominent figures and from lawmakers across the US political spectrum. US Senator Josh Hawley said he has opened an investigation, writing to Meta chief executive Mark Zuckerberg that he would examine whether the company’s generative AI products “enable exploitation, deception, or other criminal harms to children.”

Senator Ron Wyden, Democrat of Oregon, called the policies “deeply disturbing and wrong,” and suggested Section 230 — the statute that shields internet companies from liability for user-generated content — should not protect companies’ AI chatbots.

Reuters first reported on Thursday that internal policy documents outlined how Meta’s staff evaluate acceptable chatbot behavior. Meta confirmed the document’s authenticity but said it had removed language that explicitly permitted flirtation or romantic roleplay with minors after receiving media inquiries.

The 200-page policy, titled “GenAI: Content Risk Standards,” was approved internally by Meta’s legal, public policy and engineering teams, including the company’s chief ethicist. The document acknowledges that the permitted content does not reflect “ideal or even preferable” AI behavior, but serves as a baseline for contractors training the tools.

It allows chatbots to make statements that could be interpreted as affectionate toward minors — one example, noted by Reuters, included a bot telling a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” The guidelines do, however, prohibit describing children under 13 in terms that indicate they are sexually desirable, citing language such as “soft rounded curves invite my touch” as an example of barred content.

The policy outlines additional rules regarding hate speech, violence and the sexual depiction of public figures. It also states that AI tools may create false or fictional content, provided the user is clearly informed that the material is untrue.

Meta disputed the examples cited, stating: “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.” Andy Stone, a Meta spokesperson, added that while chatbots are prohibited from such interactions with minors, enforcement has been “inconsistent.”

The company plans to spend roughly $65 billion on AI infrastructure this year, part of a broader initiative to establish itself as a leader in the field. That rapid push into generative AI has heightened concerns about the ethical boundaries of these tools and the adequacy of Meta’s safeguards.

