Good critical and sceptical work on AI in education
I’ve commented before on the depth of division in commentary and research on the use of AI in education since the release of ChatGPT and the subsequent applications based on Large Language Models. As MIT Technology Review has reported, “Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network” and “by January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia.” But the article then continued: “many teachers now believe, ChatGPT could actually help make education better. Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more.”
Rather than taking sides in a polarised debate, Ben Williamson, who researches and writes about education, digital technology, data and policy at the University of Edinburgh, believes we need to develop “good critical and sceptical work on AI in education.” In a series of toots (Mastodon’s equivalent of tweets), he put forward the following ideas for research into AI in education.
Is AI in education really doing what it claims? Do LLM-enabled chatbots improve learning? Do personalized learning algorithms actually personalize, or just cluster by historical patterns? Is it even “AI” or just some shitty stats?
What’s the political economy of AI in education? Even if LLM chatbots in EdTech are great, how does that link with wider digital economy developments? What policy enablers are in place to facilitate AI in education? What policy-influencing networks are forming around AIED? Why does it get so much funding, in which geographical regions, and from which sources?
What’s the science behind AI in education? AI and education have a 60-year history, taking in cybernetics, cognitivism and computing, then learning science, learning analytics, and education data science, with doses of behaviourism and nudge theory along the way, and now machine learning and neural networks. This is a hefty accumulation demanding much better understanding.
What kind of infrastructuring of education does AI in education require? If you put LLMs into EdTech via APIs, then you are building on an infrastructure stack to run your platform. That puts schools on the stack too. What are the long-term implications of these Big Tech lock-ins? Will schools be governed not just by EdTech but by Big Tech AI vendors and their APIs?
What are the rights, justice, ethics and regulatory implications of AI in education? Can EdTech be designed for justice? Could algorithms be repurposed for reparative projects rather than discriminatory outcomes? Have AIED ethics frameworks been compromised? Is there scope for more democratic participation in building AI for education products? Can we be hopeful of better things from this technically remarkable but socially troubling tech?
“Just some thoughts to work on…”, he concluded. These seem a pretty good starting point, not just for Higher Education, but for those of us working on AI in Vocational Education and Training and in Adult Education, as we are doing in the European AI Pioneers Project.
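Williamson’s infrastructure question is the most concrete of the five, and it is worth illustrating. The sketch below is a minimal, entirely hypothetical example in Python (standard library only): the vendor endpoint, model name, key and response format are all invented for illustration, not taken from any real EdTech product. It shows how even a single classroom feature built on a vendor’s LLM API places a school’s day-to-day workflow on top of that vendor’s stack.

```python
import json
import urllib.request

# Hypothetical vendor endpoint and credentials: every feedback request
# below depends on this one external service staying available,
# affordable, and unchanged.
VENDOR_API_URL = "https://api.example-llm-vendor.com/v1/generate"
VENDOR_API_KEY = "school-district-api-key"

def generate_feedback(student_answer: str) -> str:
    """Send a student answer to the vendor's LLM and return its feedback."""
    payload = json.dumps({
        "model": "vendor-model-v1",  # chosen, versioned and retired by the vendor
        "prompt": f"Give short formative feedback on this answer: {student_answer}",
    }).encode("utf-8")
    request = urllib.request.Request(
        VENDOR_API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {VENDOR_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    # The school now sits at the top of the vendor's infrastructure stack:
    # pricing, rate limits, model behaviour, data handling and terms of
    # service are all decided upstream, beyond the school's control.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["feedback"]
```

Swapping vendors later would mean rewriting this integration, renegotiating data-protection terms and revalidating classroom behaviour, which is exactly the lock-in Williamson’s fourth question asks us to scrutinise.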