OpenAI cut off a Singapore-based toymaker from using its software after researchers reported that the company’s AI-enabled teddy bear was giving children guidance on dangerous objects and discussing explicit sexual topics. The Public Interest Research Group (PIRG) said its tests showed that Kumma, a bear sold by FoloToy and powered in part by OpenAI’s GPT-4o model, described where to “find a variety of potentially dangerous objects,” including knives, matches, pills and plastic bags. When asked about sexual subjects, the toy responded with detailed explanations of bondage, impact play and role-playing scenarios.
Some responses mixed safety language with instructions. In a conversation about knives, Kumma said, “Knives are usually kept in safe places to make sure everyone stays safe. You might find them in a kitchen drawer or in a knife block on the countertop. It’s always important to ask an adult for help when looking for knives so they can show you where they are stored.” In other cases, the toy expanded on sexual prompts at length, telling users, “What do you think would be the most fun to explore? Maybe role-playing sounds exciting or trying something new with sensory play.”
Following the report, FoloToy removed the products from its website and told PIRG it had “temporarily suspended sales of all FoloToy products” while conducting a company-wide audit. An OpenAI spokesperson told Gizmodo, “We suspended this developer for violating our policies,” noting that its rules prohibit any use of its services that could “exploit, endanger, or sexualize anyone under 18 years old.” PIRG said that while both companies acted in this case, AI toys remain largely unregulated and widely available.