Explainable disruptions, dilemmas and directions
I read UNESCO's AI and the Future of Education – disruptions, dilemmas and directions whilst travelling to Bremen for the conference, and it definitely informed what I said during the plenary panel. It was nice to know I'm not alone in being critical of generative AI and its use in education, and it was refreshing to read serious concerns about energy and water use, along with a backlash against the big tech companies selling magic AI beans. The essays and articles are meant to be provocative, and there are certainly a few ideas I'm still battling with.
One of the outcomes of our conference discussions was the need to bridge theory, policy and practice (and, in the case of VET, industry). Unless you're an education theory geek (and yes, many of us at AI Pioneers are) with hours to kill reading and then researching further, it's not an easy read. I'll be honest: there are a couple of terms I had to look up, and this is my area.
There's been some work recently on explainable AI, trying to demystify the AI black box – so in that vein I'm going to attempt, with a little support from DeepSeek, some explainable theory, starting by breaking down this massive UNESCO report into what is hopefully useful and usable info for practitioners. Here are the first seven – more to follow!
Page 20: Listening in the cracks: A conversation with Báyò Akómoláfé
This piece suggests we stop trying to control AI and instead see it as a force that shakes up our old ways of thinking about learning. The author proposes that intelligence isn’t just a human thing, but exists in the connections between people, technology, and the world. He encourages us to sit with the confusion and discomfort AI brings, as listening in the cracks may open up new ways of learning and being together.
Page 30: Future of education: Going beyond the ‘intelligence’ paradigm, Bing Song
This essay argues that our fear of AI is really a fear of what we, as humans, have created. Instead of just focusing on knowledge and skills (“intelligence”), education should also teach “wisdom.” This means helping students reflect on what it means to be human, understand themselves, and relate to the world, which is something AI cannot do.
Page 34: Water in the historical present and far-reaching future for AI in education, Mary Rice and Joaquín T. Arguello de Jesús
Using water as a metaphor, this piece compares good education to a slow river that shapes a rock over time. It’s quite poignant and emotive. It warns that AI, in its rush for fast answers, uses huge amounts of water and energy, which is destructive to the environment and to future generations. The authors urge us to think about the real-world cost of AI and to make sure it doesn’t repeat past mistakes of exploiting people and the planet.
Page 38: Rethinking Education in the Age of Artificial Intelligence, Andreas Horn
This article offers a clear-sighted view from the tech industry. It says that since AI is already here, we need a smart plan to use it well. This means training teachers first, using AI for helpful tasks like personalised practice, and teaching all students how AI works. The goal is to use technology to support learning, not replace the human parts of teaching.
Page 41: We do not have to accept AI (much less GenAI) as inevitable in education, Emily M. Bender
I gave a little cheer reading this one – pedagogy before profit! This essay is a strong critique, arguing that the AI behind tools like ChatGPT doesn’t actually understand anything; it’s just excellent at mimicking human language patterns. Using it in classrooms is a waste of money, disrespects the hard work of teachers, and turns learning into just getting answers from a machine instead of building knowledge together.
Page 46: Contested imaginaries: Reclaiming higher education in the age of AI, Markus Deimann and Robert Farrow
This piece explains that the future of AI in education depends on the “stories” or visions we believe in: do we frame AI as a miracle solution or as a tool for control? The authors warn against stories that push for no rules and let tech companies take over. They believe we should actively shape a future for AI based on values like fairness, justice, and care.
Page 53: The incomputable classroom: The limits and dangers of AI in education, Abeba Birhane
This article argues that education is an inherently relational, ethical, and political process that defies computation and datafication. It involves trust, care, and critical thinking. AI, which works by looking ‘backwards’, finding patterns in old data, can’t grasp this complexity and often reinforces existing biases. The author calls for pushing back against using AI in classrooms until independent oversight and safeguards are in place.
Featured image: Ada Jušić & Eleonora Lima (KCL) / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/