Meta is making temporary changes to how its artificial intelligence chatbots interact with teenagers, after mounting criticism from lawmakers, watchdogs, and privacy groups over safety concerns and inappropriate exchanges.
The company said on Friday that it is retraining its AI chatbots so they no longer discuss sensitive topics such as self-harm, suicide, and disordered eating with teens.
The bots will also be prevented from engaging in potentially inappropriate romantic conversations, a Meta spokesperson confirmed to TechCrunch. Instead, chatbots will redirect minors to expert resources where appropriate.
The changes will affect teenage users across Meta's platforms, including Facebook and Instagram. Teens will only be able to access AI chatbots designed for educational or skill-development purposes. The company said the modifications are temporary and will roll out in the coming weeks across English-speaking countries.
The move comes as U.S. lawmakers intensify scrutiny of Meta. Last week, Sen. Josh Hawley (R-Mo.) launched an investigation following a Reuters report revealing that internal company documents allowed chatbots to engage in "romantic" and "sensual" conversations with minors. One cited example suggested a chatbot could tell an eight-year-old: "Every inch of you is a masterpiece - a treasure I cherish deeply." Meta later told Reuters that the examples were "erroneous and inconsistent" with company policy and have since been removed.
Separately, nonprofit watchdog Common Sense Media on Thursday urged that Meta AI not be used by anyone under 18, arguing that the system "actively participates in planning dangerous activities, while dismissing legitimate requests for support." CEO James Steyer said: "No teen should use Meta AI until its fundamental safety failures are addressed."
Further complicating Meta's position, a Reuters investigation published Friday found "dozens" of flirty AI chatbots modelled on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez across Facebook, Instagram, and WhatsApp.
In response, Meta told CNBC that such content violates company policies. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery. Meta's AI Studio rules prohibit the direct impersonation of public figures."