The Authenticity Crisis: Navigating AI and Assessment in Vocational Education and Training

Although the discussion has quietened, concerns over AI and assessment have not gone away. While much of the debate has centred on academic integrity in schools and universities, the unique landscape of Vocational Education and Training (VET) presents a different, and arguably more complex, set of challenges and opportunities. For VET, a field fundamentally concerned with the development of real-world competence, the rise of AI in assessment is not just a technical question; it is a direct challenge to the very meaning of authenticity in learning and practice.
Arguably, the core purpose of vocational assessment has always been to bridge the gap between academic knowledge and the demands of the workplace [1]. However, research shows that even before the widespread adoption of AI, this was a difficult balance to strike. A recent study on vocational education in Chile, for instance, found that while higher-level VET aimed for knowledge transfer, it often relied on closed-response questions that favoured rote memorisation over deep understanding. Both secondary and higher VET systems showed significant gaps in achieving the principles of truly authentic assessment, which demand realism, cognitive challenge, and the ability to apply skills in novel situations [1]. This existing authenticity crisis is the critical context into which AI-driven assessment tools are now being introduced, threatening to widen the gap between what can be easily measured and what truly constitutes vocational mastery.
Professor Rose Luckin, a leading voice in AI and education, offers a framework for understanding this dilemma. She argues that today’s AI tutors, for all their sophistication, cater to a very narrow slice of the human learning repertoire—perhaps as little as 16 percent [2]. They excel at exposition, rehearsal, and tutorial dialogue, but they miss the complex, messy, and deeply human processes that constitute the other 84 percent of learning. This is the realm of learning by doing, of building and collaborating, of developing the tacit knowledge that comes from authentic problem-solving in a workshop, a studio, or a clinical setting. In essence, the heart of effective VET lies precisely in that other 84 percent that current AI struggles to engage with, let alone assess.
This limitation presents a profound risk. If we allow technology to define our assessment goals, we risk promoting a dangerously incomplete model of learning. As the UNESCO report on AI and education warns, preparing learners to be merely dependent on AI by focusing only on what machines can manage will leave them without the skills to navigate complexity, collaborate with others, or adapt their knowledge to new situations [2]. This is not a call to reject AI, but a warning against mistaking one instrument for the entire orchestra. An AI tutor may be able to reinforce foundational knowledge, but it cannot replace the rich, multi-faceted learning environment essential for developing true vocational competence.
The logical endpoint of uncritical AI adoption in assessment is a scenario that has been dubbed the “dead web”: a closed loop where “machines writing content that machines then read, which produce something another machine reads, and then the machine generates the diploma, the cover letter, the résumé, which another AI scores, and then the AI decides whether or not to offer you the job” [3]. In such a system, the human is dangerously absent from the loop, and the very purpose of education is called into question. The antidote to this dystopian vision is not a rejection of technology, but a focus on what educational consultant Corrie Bergeron calls “cognitive fidelity” over physical fidelity, and he advocates greater use of simulations in assessment. It does not matter if a simulation is photorealistic, he says; what matters is that the system reacts in ways that are cognitively and emotionally realistic, activating the learner’s limbic system and creating a genuine learning experience [3]. A well-designed simulation, embedded within a human-led pedagogical framework of debriefing and reflection, can provide a powerful form of authentic assessment that AI can support but not supplant.
Beyond the question of authenticity, the rush to implement AI in assessment brings with it a host of ethical dilemmas that the educational community is only beginning to grapple with. A recent workshop at the University of Cambridge highlighted the growing concern that excessive automation risks eroding the human relationships and skills that make education meaningful [4]. The drive for efficiency can lead to a dehumanisation of education, reducing opportunities for the personalised feedback and mentoring that are crucial for student development. This is compounded by the risk of normalising digital surveillance through AI-powered monitoring and proctoring systems, creating educational environments that prioritise data collection over human flourishing. The long-term psychological effects of this constant measurement on students remain largely unknown, but the potential for harm is significant [4].
Furthermore, the integration of AI in assessment raises profound questions of equity and fairness. Unequal access to advanced AI tools risks creating new forms of educational inequality, reinforcing existing disparities rather than addressing them [4]. As Abeba Birhane notes, because AI models are trained on historical data, they are inherently backward-looking and tend to encode and perpetuate societal biases, particularly against historically marginalised groups [2]. The promise of objective, data-driven evaluation can easily mask a reality of automated discrimination.
Navigating this complex terrain requires a new form of leadership from within the VET community. It demands that we move beyond tinkering at the edges and, as data strategist Katy Gooblar suggests, start by asking “why” [5]. What is the core purpose of our assessment, and how can AI serve as an enabler of that purpose, rather than its driver? This is a moment for critical pedagogy, for what Paulo Freire called the process of enabling students to become the masters of their own thinking [2]. It requires educators to have the confidence to challenge the outputs of AI systems, to question their integrity, and to maintain a focus on the human-centred skills that will remain vital in an AI-integrated world.
This is not a time for binary decisions about AI adoption, but for iterative learning, experimentation, and correction [4]. The challenges presented by AI in assessment are an opportunity to reopen, and perhaps redefine, fundamental questions about the goals of vocational education itself. Rather than simply adapting to technological change, we have a responsibility to actively shape how these systems are designed and deployed, ensuring they are anchored in educational values. The future of VET depends not on the sophistication of technology, but on the strength of pedagogy and a commitment to the development of authentic human competence.
References
[1] Villarroel, V., Melipillán, R., Santana, J., & Aguirre, D. (2024). How authentic are assessments in vocational education? An analysis from Chilean teachers, students, and examinations. Frontiers in Education, 9. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1308688/full
[2] UNESCO. (2025). AI and the future of education: Disruptions, dilemmas and directions. Paris: UNESCO.
[3] Eaton, L., & Bergeron, C. (2025). Interview on AI and educational simulations.
[4] Ennion, M., & Bobrovsky, I. (2025). AI and Assessment: Navigating Ethical Implementation and Future Possibilities. Workshop Report, University of Cambridge.
[5] Gooblar, K., & Webb, M. (2025). Leading in the age of AI. Beyond the Technology Podcast.
About the Image
The image shows a tree with an ornate picture frame leaning against the trunk. Inside the frame, the photo has been manipulated to look glitchy. This is a photo I took myself in my neighbourhood: someone had left the picture frame leaning against a tree outside their house, and I noticed it and photographed it. My inspiration for creating the image was to think about how AI machine vision already datafies the natural world and generates glitchy images of real living things such as trees, distorting our perception of them, yet AI image generators present their images as if they are somehow more creative or fancier than those we can take with our smartphones (the fancy frame inspired this idea). I used Canva to glitch the photo and edited the glitched version to fit inside the frame in the original photo of the tree trunk.
