Trained to stop learning?

A new report from the UK-based higher education think tank Wonkhe deepens the already fraught debate about AI and academic integrity. Titled Trained to stop learning?, the report delivers a central finding that is both provocative and compelling: current university AI policies are not catching the cheats, but are instead “consistently punishing the most conscientious students.” While based on a survey of UK students, the research exposes a fundamental disconnect between institutional policy and student reality that has profound implications for educators, trainers, and policymakers across Europe.
The Wonkhe report reveals that while universities are scrambling to implement AI detection software and revise rules around academic integrity, many students are engaged in a far more sophisticated ethical debate. They are not simply asking, “Will I get caught?” but are developing their own principles about what constitutes meaningful learning. They draw sharp distinctions between using AI for “structural scaffolding” versus content generation, or as a “junior researcher” versus a final author. The report’s most powerful conclusion is that the obsession with policing AI use is a distraction from a more fundamental question: what is assessment for? Is it to measure genuine understanding, or simply to reward the production of text?
From a mainland European perspective, the report’s findings are an urgent call for a different conversation, particularly in four key areas.
First, the report’s critique of AI detection tools is amplified in a multilingual Europe. Research has repeatedly shown that AI detection software is notoriously unreliable, with high rates of false positives. Crucially, these tools are significantly more likely to flag text written by non-native English speakers as AI-generated. For a continent where hundreds of thousands of students are studying in a second or third language, relying on these flawed systems is not just bad practice; it is an act of systemic discrimination. It creates a situation where the most linguistically diverse student bodies are the most vulnerable to false accusations of academic misconduct, a clear violation of the principles of equity and inclusion that European education systems strive for.
Second, the European Union’s own regulatory framework, the AI Act, is on a collision course with the approach of many institutions. The Act classifies AI systems used “to evaluate students in educational and vocational training institutions” as “high-risk”. This designation imposes stringent obligations on providers, including robust risk management, high levels of accuracy, and detailed technical documentation to prove compliance. It is highly questionable whether the AI detection tools currently on the market, with their known flaws and biases, could ever meet this standard. European institutions that continue to rely on them are not only penalising their most honest students but are also exposing themselves to significant legal and regulatory risk under a framework designed to protect citizens from the harms of unaccountable AI.
Third, the Wonkhe study found that “visible accountability moments”, such as oral exams or in-person presentations, change how students use AI. When they know they must personally demonstrate their understanding, they use AI to test themselves and check their reasoning, rather than to bypass learning. This aligns perfectly with the principles of competence-based assessment that underpin most European VET systems. By focusing on the application of skills in real-world contexts, VET is arguably better positioned than traditional essay-based higher education to design assessments that are inherently more resistant to AI misuse. The challenge is not to ban the tool, but to design assessments where the tool is of little use in faking competence.
Finally, the report’s finding that disabled students are using AI as a powerful cognitive support tool, often more effective than formal adjustments, resonates with recent OECD work on using AI to support neurodivergent learners in VET. This reframes the discussion entirely. For these students, AI is not a cheating device but a vital accessibility tool. Blanket institutional bans, driven by a fear of academic misconduct, risk harming the very students who have the most to gain and who are using these tools to level the playing field.
The Wonkhe report tells us that students are outrunning their institutions, engaging in ethical work that goes unrecognised and unsupported. We must resist the simplistic and flawed path of AI detection. Instead, we must leverage the strengths of our diverse, multilingual, and competence-focused educational traditions. We must focus on designing authentic assessments that measure what matters, support all learners inclusively, and align with our own legal and ethical standards for artificial intelligence. The question is not how to stop students from using AI, but how to design an education system where they don’t need to.
References
[2] MIT Sloan Management. (n.d.). AI Detectors Don’t Work. Here’s What to Do Instead.
[4] European Parliament. (2024). EU AI Act: first regulation on artificial intelligence. Annex III.
[5] Digital Education Council. (n.d.). EU AI Act: What it means for universities.
About the Image
The ‘POP AI’ series is inspired by the work of pop artists, including Warhol and Lichtenstein, who challenged the conventions of fine art by drawing on images and texts from popular and commercial culture, such as advertising, celebrity culture and comic strips. The premise of the series is that AI hype is part of contemporary popular and promotional cultures, making bold claims about the transformational benefits of AI in an attempt to insert generative AI services into every part of people’s lives in the interests of profit. Wow was created using Canva’s imagery and editing (non-gen AI) tools.
