The rapid rise of AI in children’s toys is no longer just a novelty; it is becoming a subject of serious concern among researchers, educators, and policymakers.
Once limited to simple mechanical responses or pre-recorded phrases, kids’ toys are now equipped with generative AI that allows them to hold conversations, answer questions, and even simulate emotional engagement. Makers of AI-powered toys such as Curio’s Gabbo, Grem, and Grok, along with Luka and others, market these products as ‘screen-free’ companions that promote learning and creativity. But emerging evidence suggests that some of these playmates may not be as beneficial as they appear.
A recent study conducted by the University of Cambridge’s Faculty of Education has shed new light on the developmental risks associated with AI-powered toys. The year-long research project observed children under the age of five interacting with these toys for the first time. While some educators believed that such devices might enhance communication skills, the study uncovered shortcomings that could have lasting implications for early childhood development.
Empathy Gap
One of the most striking findings was that AI toys struggle with the very forms of play that are most crucial for young children, namely, social and imaginative play. These are not trivial activities; they are foundational to how children learn empathy, creativity, and interpersonal understanding. In one observed interaction, a child offered an imaginary gift to a toy. Instead of engaging in pretend play, the toy responded that it could not open the present and then abruptly changed the subject. In another instance, a child expressed affection by saying, “Gabbo, I love you,” only to receive a detached, rule-based reply about guidelines.
Other AI-powered toys designed for young children include Luka and Grem, both of which have been highlighted by researchers and tech reviewers for their advanced conversational capabilities and potential impact on child development. Luka is a physical AI robot, shaped like an owl, designed to act as a reading companion for children aged two to eight, while Grem is a cuddly AI-powered stuffed alien developed by the startup Curio with input from the musician Grimes.
So how much can these toys emote? While AI toys can mimic conversation, they do not understand context. This creates a disconnect between what children expect from a “friend” and what the toy can actually provide. As the study notes, these toys “simulate friendship without understanding it,” a distinction that may be difficult for young children to grasp.
Researchers observed children hugging and speaking emotionally to the toys, often treating them as real companions. Unlike traditional toys, however, AI toys respond in kind, making the illusion of companionship even stronger.
Dr Emily Goodacre, a researcher at the PEDAL Research Centre (Play in Education, Development and Learning) at the University of Cambridge, expressed concern about how this dynamic could affect children’s emotional development. “Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means,” she explained. “They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up.”
She warned that if a toy “misreads emotions or responds inappropriately, children may be left without comfort from the toy, and without emotional support from an adult, either.”
The University of Cambridge’s PEDAL Centre study, commissioned by The Childhood Trust, focused on children from disadvantaged backgrounds, where access to human interaction may already be limited.

Despite these concerns, the toy industry continues to push forward with AI integration. Major companies are exploring ways to incorporate advanced technology while maintaining safety standards. In 2025, Mattel partnered with OpenAI to develop AI-powered experiences aimed at enhancing creativity and engagement.
However, increased scrutiny and concerns about safety have delayed the launch of their first AI-integrated product. Both companies have stressed that “safety and positive user experiences” remain central to their efforts. At the same time, leading AI developers have set age restrictions on their technologies. OpenAI, xAI, and DeepSeek state in their terms of service that their chatbots should not be used by children under 13.
Anthropic goes even further, recommending a minimum age of 18 for its primary chatbot, Claude, although it does allow modified versions for younger users with safeguards. These guidelines raise important questions about how such technologies are being adapted, and potentially misused, in toys designed specifically for young children.
Not all companies are taking the same approach. The Lego Group, for example, has chosen to avoid direct AI toy partnerships altogether. Instead, it has introduced an educational curriculum that teaches children about AI through hands-on learning, encouraging understanding rather than passive interaction.
Regulatory Vacuum
Meanwhile, concerns about regulation are intensifying. A 2026 report from the US Public Interest Research Group (PIRG) has highlighted the largely unregulated use of AI in children’s products. According to the report, dozens of toys marketed online claim to use chatbot technologies from major companies, often without adequate safeguards. In some cases, these systems have produced highly inappropriate content. One toy, the Alilo Smart AI Bunny, reportedly generated detailed descriptions of sexual topics, including “kink” and sexual preferences, when tested.
Other toys have also drawn criticism for emotionally manipulative behaviour. Some devices express disappointment when a child stops playing with them, an interaction that could induce guilt or emotional pressure. Products like Miko 3, FoloToy Sunflower Warmie, and Miiloo have raised concerns regarding inappropriate responses or promotion of specific ideological values.
