New project aims to understand how AI smart toys affect disadvantage, development and play


Researchers have begun the first systematic study examining how a new generation of toys, which use generative AI to hold lifelike conversations with their owners, is affecting young children’s development.

Academics at the University of Cambridge are also asking people who work or volunteer in roles supporting young children (aged 0 to five) and their families to participate in a brief survey about their perspectives on children’s use of AI in the early years.

The results will feed into the wider Cambridge project, which is exploring how children interact with AI-powered ‘smart’ toys, the emotional relationships that they form with these devices, and how parents, children, and professionals feel about the toys’ emergence. The project will focus on children from disadvantaged backgrounds, examining whether AI toys are likely to reduce or exacerbate the opportunity gaps between them and their more privileged peers.

A growing range of smart toys incorporating Large Language Models, such as OpenAI’s GPT series, is entering the market. Last Christmas, some children may have found themselves excitedly unwrapping toys like Poe, a cuddly AI bear that crafts personalised stories, or Miko, a robotic learning companion.

Still more advanced toys, which claim to offer human-like friendship and emotional support, are due to become available later in 2025.

Their impact on early childhood development, learning, wellbeing and mental health is, however, little understood. Dr Emily Goodacre, from the Faculty of Education, University of Cambridge, said: “Generative AI has burst on to the scene so rapidly that although there are now numerous toys incorporating this technology, we still know very little about how children are interacting with it.”

"We are particularly interested to know if these toys are widening inequalities, or if they can be harnessed to bridge the opportunity gap."

“This is the first systematic study of preschool children’s experiences and emotional relationships with Generative AI toys, working directly with both children and their parents. Given the digital exclusion often experienced by children from disadvantaged backgrounds, we are particularly interested to know if these toys are widening inequalities, or if they can be harnessed to bridge the opportunity gap.”

The project will be led by Goodacre and Professor Jenny Gibson, both specialists in developmental psychology based at the University of Cambridge’s Faculty of Education. It has been commissioned by The Childhood Trust, a London-based child poverty charity. Yesha Bhagat, the Trust’s Impact and Research Manager, described it as having “the potential to benefit children around the world as the field of Generative AI continues to develop and grow”.

Generative AI toys are often marketed on the basis of their potential to support children’s health, relationships and education. Miko, for example, comes pre-programmed with learning apps, tells stories and can even run yoga lessons. It is described as connecting and responding “empathetically” to users.

Another example is Fawn, a talking AI deer whose creators claim to have drawn on expertise in emotional processing, cognitive functions and neurodiversity in its development, to help children “build peaceful and connected relationships with friends and family.” Grok, a cuddly AI toy which has been supported and voiced by the musician Grimes, and bears the same name as the chatbot developed by her sometime partner Elon Musk, is being promoted as an educational toy combining “technology, safety and imagination”.

At present, there is little independent academic evidence to support or refute such claims, particularly when considering children from less advantaged settings. Very little attention has been given, either, to any potential risks these toys may pose and whether they may exacerbate existing inequalities.

The Cambridge study will begin with a review of existing research on AI toys, which will form the basis of the inquiry for the remainder of the project.

Additional data will come from the survey for early years professionals and volunteers who work with children under five, or their families. The 10-minute survey can be accessed online, and will explore their general attitude towards AI, their opinions about AI toys, and how confident they are using such toys in their work.

The researchers will then build on this evidence through structured observations of children playing with different AI toys, interviews with children, and focus groups with parents, teachers, and early years professionals, to understand the possible risks and benefits.

The project outputs are due in early 2026 and will include recommendations on how Generative AI toys can be used and designed to support childhood development, and how to make them as inclusive as possible.

The likely contents of those recommendations are difficult to predict. “There is just so much we don’t know yet about these toys – their potential, possible risks, whether parents are worried or excited about them, and what we might expect from their future development,” Goodacre said.

“We do know that children are incredible at finding ways to play in almost any given situation. It’s therefore likely that Generative AI is already stimulating new patterns of play, which could lead to some unexpected findings.”

The project, “AI and Early Childhood Development – Exploring the Implications of Non-Human Conversational Agents for the Wellbeing and Mental Health of Disadvantaged Children”, is commissioned by The Childhood Trust, with funding from the KPMG Foundation and The Ethos Foundation.

Image: Pezibear, Pixabay.