Making Sense of the European Approach to AI in Education

It is not surprising that there is growing confusion and even open anxiety about Artificial Intelligence in education. The rapid adoption of Generative AI, at least by students if not always by educational institutions, has raised basic questions about the future direction of teaching and learning. But while the headlines are often dominated by the latest releases from a limited number of large, mainly American technology companies, a different kind of development has been quietly unfolding closer to home. Over the past decade, the European Union has been constructing a comprehensive approach to AI policy, culminating in a framework that will significantly impact schools, colleges, and universities across the continent.
A recent Science for Policy Brief from the European Commission’s Joint Research Centre attempts to unpack this path to what it calls "trust and excellence". The document traces the journey from early strategic exploration in 2018 to the current maturation phase, marked by the entry into force of the AI Act. The core argument is that the EU is not merely trying to regulate a new technology, but is actively attempting to build an ecosystem where technological innovation and competitiveness are balanced with the protection of fundamental rights, societal values, and democratic integrity. For education, this is a crucial distinction. It suggests a move away from the technochauvinist assumption that digital technologies will inevitably transform education for the better, towards a recognition that AI is deeply embedded in social and political contexts that require active governance.
The JRC report divides this policy journey into three phases. The exploration phase laid the groundwork, establishing the baseline assumption that Europe needs to simultaneously build an ecosystem of excellence to mobilise resources and an ecosystem of trust to provide citizens with confidence. The transition phase turned these ambitions into actions, driven by the surge in generative AI which transformed the political debate and accelerated the need for concrete regulation. We are now in the maturation phase, characterised by three main building blocks: the AI Act to create conditions for trust, the AI Continent Action Plan to strengthen capabilities, and the Apply AI Strategy to encourage adoption.
What does this mean for vocational education and training, and for teachers more broadly? The most immediate impact comes from the AI Act itself, which takes a risk-based approach to regulation. Crucially, the deployment of AI systems in education is identified as an area of high risk. This means that AI tools used to determine access to education, assess students, or monitor behaviour will be subject to strict requirements regarding transparency, human oversight, and data quality. Furthermore, certain applications, such as emotion recognition systems in educational settings, are explicitly prohibited. This regulatory architecture places significant new responsibilities on educational institutions, which are now considered deployers of high-risk systems under the Act.
However, the JRC report acknowledges a significant challenge ahead: the gap between legislation and practice. The obligations introduced by the AI Act are being applied for the first time, and their real-world effect on providers, deployers, and users is still being tested. For teachers, the challenge is not just compliance, but understanding how to integrate these tools pedagogically while maintaining critical oversight. As the updated European Commission ethical guidelines for educators highlight, there is a growing need for ethical and critical AI literacy among teaching staff. These guidelines emphasise that educators must be equipped to make informed, context-based ethical decisions, taking advantage of AI's benefits while identifying and managing potential risks.
The broader implication of the European approach is that we must resist the urge to view AI simply as an inevitable force that education must adapt to. The JRC document reflects a conviction that excellence and trust are not competing objectives, but that realising the former requires the latter to already be in place. For researchers and practitioners in education, this means actively participating in shaping how AI is used in our institutions. We need to look beyond the marketing hype and examine the economic models and political agendas driving these technologies. By engaging critically with the evolving European policy framework, educators can help ensure that the integration of AI supports, rather than undermines, a socially just and pedagogically sound educational landscape.
References
Rodriguez Müller, P., André, A., Tangi, L., Jugel, L., Schade, S., et al., The European approach to artificial intelligence policymaking: Unpacking the path to trust and excellence, European Commission: JRC, Seville, 2026, JRC146313. https://publications.jrc.ec.europa.eu/repository/handle/JRC146313
AI Act — Annex III: High-Risk AI Systems, including education and vocational training. https://artificialintelligenceact.eu/annex/3/
EC Ethical Guidelines for Educators on AI (updated March 2026). https://education.ec.europa.eu/focus-topics/digital-education/actions/plan/ethical-guidelines-for-educators-on-using-artificial-intelligence
About the image
This digital collage reworks a piece from 1900 to expose the deal proposed by Big Tech. The title refers to the gap between what is presented as an ideal future and what is actually being exchanged. The image points to how corporate narratives obscure the real costs of this deal: what is presented as innovation and efficiency comes at the price of constant data capture, surveillance, and environmental damage.
