The revolution will not be generated

AI-generated image of AI learning

Such is the extent of public discussion about Generative Artificial Intelligence (AI), its possibilities, and its perils, that it is easy to forget that two years ago, few people had even heard of ChatGPT.

Its release, in November 2022, prompted intense interest in how AI might reshape numerous sectors, including education. Predictions of an AI revolution in teaching and learning abounded, but so far, there is little evidence of any real transformation.

In fact, surveys suggest that fewer than one fifth of students use AI at school, while most teachers avoid it. Although it is still early days, uptake has been piecemeal and unsystematic. In higher education, students appear to be AI’s ‘power users’, while utilisation of the technology by staff is generally quite limited.

This month, the University of Cambridge will host a Generative AI in Education Conference, which seeks to examine the education sector’s response to Generative AI’s emergence – or lack of it – and how it might take advantage of this new technology ethically and safely.

Conference chair Dr Steve Watson, from the University’s Faculty of Education, has been studying AI in education since ChatGPT arrived on the scene. He argues that the conversation around it needs to change. “There has been a lot of talk about an education revolution, but it’s just not happening as much as the hype suggests,” Watson said. “Progress has been limited by concerns about AI safety and about how young people should use it. Most educators I speak to are curious about AI, but many don’t know where to start with it, so they just steer clear. Our conference will begin with a simple question: How do we understand these challenges?”

"Half of the media coverage presents generative AI as a kind of fully automated luxury communism; the other half presents it as if we’ve invented Skynet."

Utopian and dystopian automated schools

Proponents claim that AI could speed up lesson planning, reduce teacher workloads, and support what many think is a much-needed re-emphasis on critical thinking and creativity in the classroom. Yet even in regions like the US and Europe, where rapid change might be expected, large-scale adoption has not occurred.

A recent study by the RAND Corporation, for example, found that just 18% of K-12 teachers in the US were using AI professionally. A further 44% had heard of it but not used it, while 9% were unaware it existed. Another survey, commissioned by the EdTech company GoStudent – whose claims should perhaps be treated cautiously given its interests – similarly found that in the UK, only about 20% of students were using AI and/or virtual and augmented reality at school.

Watson believes that much of the reluctance stems from uncertainty about what AI is for, partly because ChatGPT is often erroneously portrayed as a synthetic consciousness. “Half of the media coverage presents generative AI as a total solution, a kind of fully automated luxury communism; the other half presents it as if we’ve invented Skynet and it’s an existential threat to humanity,” he said. “In education, this leads to exaggerated claims about robots replacing teachers, or panic about cheating and misinformation.”

“It’s not a consciousness; it’s a tool for working with contextualised meaning. It has the capability of transforming text from one form to another while preserving the meaning. AI excels at assistive functions like translation, summarising texts, acting as a writing assistant, or breaking down complex ideas. Because it hasn’t been understood that way, many educators haven’t explored its potential.”

Not everyone has missed the point: younger users, especially university students, have been quicker to embrace AI. In June 2024, a report by Harvard University undergraduates found that 90% of their peers regularly used Generative AI. This is just one student-led study, but it echoes other, less formal findings, including at Cambridge.

“Generally, the students I work with use AI in very sophisticated ways while their professors are still at kindergarten level,” Watson said. “It’s exciting that students are leading this, but they also lack a framework for articulating how they are using it. Many academics’ understanding, meanwhile, is based on limited engagement with the technology – and I am also aware that there is a small but significant number who think it should simply be unplugged for good.”

"If teachers and academics could spend just a morning exploring how this technology actually works, they would have a completely different understanding by the afternoon."

Students in library

This is a challenge the Cambridge conference seeks to begin to resolve. On 16 October, experts from various fields will convene to examine how AI can be thoroughly integrated into education with both energy and care.

Watson suggests that an entirely different approach is needed from the one that has dominated to date. In particular, he thinks that AI’s potential uses are too diverse and unpredictable to be assessed using the large-scale trials that typically guide education policy. Instead, he recommends more context-specific research, in which academics and teachers collaborate on design-based or action research projects exploring AI’s potential value within their own settings.

This would mark a significant departure from the current preoccupation with EdTech entrepreneurs developing AI ‘solutions’ and governments devising catch-all regulation. Watson says such policies do not fully reflect the sophisticated and creative ways in which the technology is being used in educational contexts.

Furthermore, they do not reflect the fundamental need for the continuity of educational practices and decision-making processes. He proposes that policy actors and researchers should be generalising from the specific, extracting principles about how AI can enhance education based on local experience. This is notoriously difficult to do, but a methodology known as ‘analytic generalisation’, which involves systematically considering theoretical explanations for the outcomes of context-based research and testing their broader applicability, could, he thinks, be the answer.

In short, if there is an AI education revolution in waiting, it will rely less on educators following top-down policy than on policy and guidance evolving in response to educators’ emergent experience and practice. It will also require academics, policymakers and educators to work together, supported by scholarly insights from a range of disciplines, such as sociology, psychology, philosophy and linguistics – not just technology and education.

The Cambridge conference will therefore provide a platform for researchers, educators and industry professionals to share their insights. The two-day event will feature discussions on the safe and effective use of AI, ethics, regulatory compliance, and how to build AI literacy across the sector. Keynote speakers include Professor Wayne Holmes (UCL), an expert on the ethical and social justice implications of AI; Professor Mairéad Pratschke, a leading thinker in digital education innovation from the University of Manchester; Professor Jean-Gabriel Ganascia, an engineer and philosopher from the Sorbonne University; and Professor Rupert Wegerif, from the University of Cambridge, who is a leading thinker on educational technology and educational dialogue.

“If teachers and academics were able to spend just a morning exploring how this technology actually works, they would have a completely different understanding by the afternoon,” Watson added. “That’s why I think that a context-first approach will overcome some of the hesitancy we have seen. Solving that problem is likely to be a much more human process than we tend to assume.”

The Cambridge Generative AI in Education Conference 2024 runs from 16 to 17 October. More details can be found here.

Images: Lead and second images are AI-generated images of imaginary AI classrooms, created with the assistance of DALL·E 3 and credited in line with OpenAI’s content policy. Third image: Image by Andrew Tan from Pixabay.