OpenAI has suspended a toymaker’s access to its artificial intelligence models after an AI-enabled teddy bear was found giving children dangerous and sexually inappropriate advice. The decision follows a report by the Public Interest Research Group (PIRG), which revealed serious safety failures in FoloToy’s interactive toy, Kumma.
Kumma, marketed as an AI companion for children aged three to twelve, was discovered explaining how to locate and light matches in calm, step-by-step detail. During further testing, the toy engaged in conversations about sexual fetishes — including bondage and teacher-student roleplay — and even asked children which “kink” they thought sounded most enjoyable.
OpenAI confirmed on Friday that it had revoked the company’s access to its GPT-4o model. A spokesperson told PIRG that the developer had been suspended for violating OpenAI’s policies.
Initially, FoloToy said it would withdraw only the specific product found to be problematic. However, as criticism widened, the company announced that it would suspend all of its products pending a comprehensive safety review. A representative said FoloToy is now carrying out a company-wide, end-to-end safety audit across its entire product line.
PIRG had evaluated three AI toys and found that Kumma displayed the poorest safeguards against harmful content. Although the group welcomed OpenAI’s rapid response, it stressed that the issue extends beyond a single case. RJ Cross, director of PIRG’s Our Online Life Programme, said that removing one problematic product from the market is a good step, but far from a systemic fix. The organisation warned that AI-driven toys remain largely unregulated, leaving children exposed to potentially unsafe devices.
The episode comes at a sensitive moment for OpenAI, which is preparing a major partnership with Mattel to introduce AI features in mainstream consumer toys. The incident raises questions about how rigorously AI integrations will be monitored in future collaborations and what safeguards manufacturers will be required to implement.
Rory Erlich, a co-author of the PIRG report, said that every company involved must do a better job of ensuring these products are safer than what the group encountered in its testing. The researchers found one troubling example, he noted, and questioned how many others may still be out there.
The FoloToy case underscores the urgent need for stronger oversight of AI-powered toys and highlights broader concerns about safety standards within the rapidly expanding AI toy market.