Is Generative AI Damaging Learning?

It's fairly obvious that scepticism about the benefits of Generative AI in education is growing. And yesterday a surprising name, Ethan Mollick, added his voice to the list. Mollick is a researcher in entrepreneurship and innovation, and in how to teach people to become more effective leaders and innovators. But he is better known for his more recent work on AI, and especially how it affects education and work. Through his newsletter, One Useful Thing, he has led the way in prompt engineering for education and has generally been a cheerleader for the potential of Generative AI.
But yesterday, in an edition of his newsletter entitled 'Against "Brain Damage"', he discussed a series of studies and reports on the impact of Generative AI on learning. He referred to a paper from the MIT Media Lab (with authors from other institutions as well), titled “Your Brain on ChatGPT.” The actual study, he says, is much less dramatic than the extensive press coverage suggests. "It involved a small group of college students who were assigned to write essays alone, with Google, or with ChatGPT (and no other tools). The students who used ChatGPT were less engaged and remembered less about their essays than the group without AI. Four months later, nine of the ChatGPT users were asked to write the essay again without ChatGPT, and they performed worse than those who had not used AI initially (though were required to use AI in the new experiment) and showed less EEG activity when writing."
He went on to say that we are increasingly outsourcing our thinking to Generative AI and asked how we can use AI to help, rather than hurt, us. The crux of the problem, he says, is that if you are trying to learn or synthesize new knowledge and you outsource your thinking to the AI instead of doing the work yourself, you will miss the opportunity to learn. He also provided details of other tests with students that produced similar results.
But the problem does not lie with the students but rather with how the technology is applied: "The AI is trained to be helpful and answer questions for you. Like the students, you may just want to get AI guidance on how to approach your homework, but it will often just give you the answer instead. As the MIT Media Lab study showed, this short-circuits the (sometimes unpleasant) mental effort that creates learning. The problem is not just cheating, though AI certainly makes that easier. The problem is that even honest attempts to use AI for help can backfire because the default mode of AI is to do the work for you, not with you."
But Mollick believes that we have increasing evidence that, when used with teacher guidance and good prompting based on sound pedagogical principles, AI can greatly improve learning outcomes. However, the examples he provides of studies suggesting such outcomes have been heavily contested in discussions on LinkedIn. In fact, there is a wider debate about how such studies should be designed and how their results should be interpreted. Indeed, Mollick himself concedes that "no study is perfect", referring to a study where the control was no intervention at all, rendering it impossible to fully isolate the effects of AI, though he says the authors do try to do so.
At the end of the day, Mollick says that although the challenges AI poses for education are very real, "there is reason to hope that education will be able to adjust to AI in ways that help, and not hurt, our ability to think. That will involve instructor guidance, well-built prompts, and careful choices about when to use AI and when it should be avoided."
Personally, I think the issue is not just that Generative AI tries to provide answers, but that the providers, and especially OpenAI, the developer of ChatGPT, have encouraged education to adapt to the technology, ignoring pedagogy in the process. And I cannot really see a future in which teachers are required to become experts in prompt engineering to overcome the deficits of Generative AI models.
About the Image
"The image is inspired by the widely cited 2021 paper titled, 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big' by Emily M Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell. The 'stochastic parrot' is a metaphor for large language models (like ChatGPT) that generate text by statistically predicting the next word based on large datasets, rather than by understanding the meaning, truth or context of the user prompts. The illustration's glitchy parrots represent this idea, showing how their outputs are often uneven and varied. Some of the parrots are leashed, symbolising attempts to maintain AI under control, while other respond to human commands more autonomously. The image reflects the emerging, often improvised ways humans and AI are learning to work together. This image was selected as a winner in the Digital Dialogues Art Competition, which was run in partnership with the ESRC Centre for Digital Futures at Work Research Centre (Digit) and supported by the UKRI ESRC.