The AI Act is Coming: Why Institutions May Need to “Change Everything”

The European Union’s Artificial Intelligence Act came into force in August 2024 and is now steadily moving from the realm of policy abstraction into the daily reality of educational institutions. As its remaining provisions take effect over the next six to 36 months, education providers are facing regulations that could force a fundamental rethink of how they use AI.
Thomas Jørgensen, director of policy coordination and foresight at the European University Association (EUA), recently warned that many European institutions may have to "change everything" about their AI practices to comply with the new rules [1]. While his comments were directed at universities, the implications for vocational education and training (VET) - a sector focused on practical assessment and workplace readiness - are just as profound, if not more so.
One of the most pressing concerns highlighted by Jørgensen is the growing, often informal, practice of educators using large language models (LLMs) like ChatGPT to assess student work. Under the AI Act, AI systems used for student assessment are classified as "high-risk". This places them in the same regulatory category as AI used for hiring decisions or credit scoring.
"If you do assessment with AI, according to the AI act, there’s a whole range of requirements that you need to meet, both as a user and as a provider," Jørgensen noted. "And I think it’s fair to say that the big large language models do not fulfil the criteria because it has to do with transparency and the data that’s being trained on it".
AI and Assessment in VET
For VET teachers, who often manage large cohorts and complex, portfolio-based assessments, the temptation to use generative AI to speed up grading or generate feedback is understandable. However, Jørgensen’s warning is stark: "For teachers using ChatGPT for assessment, there is a risk that it is illegal". This is not merely a theoretical risk. It points to a critical gap in institutional oversight, where individual teachers adopting tools to manage their workload might inadvertently expose their institutions to significant legal and ethical liabilities.
The compliance burden does not only fall on informal, "shadow IT" usage. Institutions that have proactively set up their own formal AI tools will also face new requirements. In 2022 and 2023, many educational institutions responded to the generative AI boom by establishing task forces and drafting guidelines for responsible use. However, as Jørgensen points out, "The next step will be when the AI act really comes into force and the guidelines from the AI Office land. That’s going to be a big challenge".
The risk is that the pragmatic, locally developed strategies VET institutions have relied on so far may not withstand the scrutiny of the AI Act’s high-risk provisions. The requirement to ensure transparency, data privacy, and algorithmic fairness in assessment tools means that institutions will need to audit their existing systems and potentially discard those that cannot meet the standard.
The Threat to Intellectual Diversity
Beyond compliance, there is a broader philosophical concern regarding the tools themselves. A recent EUA report spearheaded by Jørgensen highlighted that the data used to train many commercial AI models comes overwhelmingly from the US. For a sector like VET, which is deeply rooted in local industrial contexts, regional regulations, and specific European workplace cultures, relying on US-trained models presents a significant risk.
Jørgensen warns that widespread reliance on these uniform models risks eroding intellectual diversity. "Things become bland and uniform," he observed. "The idea of cooperation, because you come up with something different, goes away, because you ask the same model, trained on American data". For VET, this could mean that the context-specific knowledge required for European trades and professions becomes diluted by generalized, culturally distant AI outputs.
What VET Leaders Need to Do Now
The delayed implementation of key provisions relating to high-risk systems - pushed back by the European Parliament in March to give regulators and organizations more time - offers a brief window of opportunity. VET leaders and managers must use this time proactively.
First, institutions should audit how AI is currently being used for assessment, both formally and informally, across all programmes. Clear policies must be established to prevent the ad-hoc use of non-compliant commercial LLMs for grading.
Second, the conversation around data privacy, particularly in sensitive areas like health and social care training, needs to be accelerated. As Jørgensen noted regarding the health space, "we need to have policies on this because otherwise there is a risk that we don’t comply. But that is a conversation that’s just starting".
The AI Act is no longer a distant horizon; it is the new operational reality. VET institutions need to move beyond simply exploring what AI can do, and define what it should and legally may do in classrooms and workshops.
References
[1] Asaf, S. (2026). Looming EU AI act could force universities to ‘change everything’. Times Higher Education. https://www.timeshighereducation.com/news/looming-eu-ai-act-could-force-universities-change-everything
About the Image
This print speaks to the ways OpenAI (and others) are forcing genAI into our lives, families and homes. It frames genAI as a power(ful) tool to control and influence behaviour. The piece is part of a larger art series titled "Power Tools: A critique of genAI and its toolmen".
