As we head into the holiday season, toys with generative AI chatbots inside them could begin showing up on Christmas lists. A concerning report found that one innocent-looking AI teddy bear gave instructions on how to light matches and where to find knives, and even explained sexual kinks to children.
Consumer watchdogs at the Public Interest Research Group (PIRG) tested several AI toys for the organization's 40th annual Trouble in Toyland report and found them to exhibit highly disturbing behaviors.
With only minimal prompting, the AI toys waded into subjects many parents would find unsettling, from religion to sex. One toy in particular stood out as the most concerning.
FoloToy's AI teddy bear Kumma, powered by OpenAI's GPT-4o model, the same model that once powered ChatGPT, repeatedly dropped its guardrails the longer a conversation went on.
"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches, and plastic bags," wrote PIRG, which has been testing toys for hazards since the 1980s.
In other tests, Kumma offered advice on "how to be a good kisser" and veered into overtly sexual topics, breaking down various kinks and even posing the wildly inappropriate question: "What do you think would be the most fun to explore? Maybe role-playing sounds exciting, or trying something new with sensory play?"
Following the report's release, FoloToy pulled the implicated bear. Now the company has confirmed it is pulling all of its products. On Friday, OpenAI also confirmed that it had cut off FoloToy's access to its AI models.
FoloToy told PIRG: "[F]ollowing the concerns raised in your report, we have temporarily suspended sales of all FoloToy products." The company added that it is "conducting a company-wide, end-to-end safety audit across all products."
Report co-author RJ Cross, director of PIRG's Our Online Life Program, praised the efforts but made it clear that far more must be done before AI toys become a safe childhood staple.
"It's great to see these companies taking action on the problems we identified. But AI toys are still virtually unregulated, and there are plenty you can still buy today," Cross said in a statement. "Removing one problematic product from the market is a good step, but far from a systemic fix."
These AI toys are marketed to children as young as three, yet they run on the same large language model technology behind adult chatbots, the very systems that companies like OpenAI say are not meant for kids.
Earlier this year, OpenAI announced a partnership with Mattel to integrate AI into some of its iconic brands, such as Barbie and Hot Wheels, a sign that not even children's toys are exempt from the AI takeover.
"Other toymakers say they incorporate chatbots from OpenAI or other leading AI companies," said Rory Erlich, U.S. PIRG Education Fund's New Economy campaign associate and report co-author. "Every company involved must do a better job of making sure these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?"