Developing new Guidelines for AI in Education in Europe

Report by Graham Attwell and George Bekiaridis
Yesterday we went to the first (online) meeting of the European Digital Education Hub Working Group on revising the (2022) Ethical Guidelines on the use of AI in Education. The group of twenty-five educators, drawn from different sectors and roles across Europe and working with officials from the European DG Education and Culture, is tasked with developing the updated Guidelines by the start of 2026. The present Guidelines date from before the release of generative AI, so there is a clear need for an update.
We split into three working groups to brainstorm what was lacking in the previous guidelines and what we should be focusing on in the new release. This was a wide-ranging discussion. One concern was that a single set of guidelines could not encompass the breadth of different stakeholders and participants in education at a European level. Many of us were keen to see more support for teachers and trainers in terms of pedagogy. A professor in higher education proposed that assessment should be a major subject, saying that universities continued to be in chaos due to the widespread use of generative AI by students. And there was general concern at the level of support for teachers and trainers. There are competence frameworks for educators, notably the UNESCO frameworks. But where are the opportunities for professional development to build competence and confidence in using AI for teaching and learning?
Concerns over data privacy and intellectual property were also expressed. And with the AI Act due to come into effect early next year, who would be responsible for compliance in schools and colleges?
Participants had differing perspectives on AI ethics, shaped by their particular pedagogical goals, professional contexts, and cultural values. Some emphasised the importance of protecting student data and privacy rights, while others focused on the risks of bias in AI-driven assessments or the moral implications of delegating decision-making to algorithms. Many viewed AI ethics as a framework for fostering responsible, transparent, and equitable uses of technology in the classroom, whereas some saw it more as a set of guidelines that must continually evolve in response to rapid technological advancements. This wide-ranging set of interpretations underscores the complexity of defining “ethical AI”, revealing a rich, ongoing discourse among educators who strive to balance innovation with safeguarding the well-being and interests of learners.
A steering group is putting together a report of our discussions to draw up a plan for how we go forward, and we are holding a rare face-to-face meeting in May. Interesting stuff, and we look forward to seeing how it all turns out. It is a very different process from the previous traditional EU way of appointing expert groups to develop new policy.
In terms of vocational education and training and adult education, the focus of AI Pioneers, we are, I think, the only two members from these sectors on the working group. But we will make sure our voice is heard!
About the Image
“Pas(t)imes in the Computer Lab” subverts the “domestic” painting of a woman knitting next to a window, recontextualising her craft as weaving the wires of an early computer. Outside the window are the columns of Nevile’s Court at Cambridge University, an ode to the women of Newnham College who made the code-breaking decryption of World War II possible. The visual subversion of computing as a “pastime” offers a critical reading of the vital labour that underpins AI technology. Such overlooked labour is not merely a historical anecdote: it is increasingly accelerated by the rise of AI, as the work of data cleaners, content moderators, warehouse workers, and others remains hidden from public view.