Researching Futures in Education and Training: Reflections on the Social Life of AI and Troubled Times

The rapid adoption of Generative AI, at least by students if not by educational institutions, raises many issues around the future direction of education. For researchers, teachers, and trainers in vocational education and training (VET), the question is not just what the future holds, but how we can actively research and shape such futures. Here is a discussion of work by two researchers whom I trust in this area: Professor Keri Facer and Dr. Ben Williamson. Their writings offer perspectives on how we might approach the intersection of education, technology, and our collective futures.
In an interview, Keri Facer discusses the concept of “troubled times,” highlighting the multiple, escalating crises we face, from ecological emergencies to deep inequalities and rapid technological shifts [1]. Facer cautions against the narrative that there is only one inevitable future, an assumption that can lead to a sense of helplessness. Instead, she advocates for “Futures Literacy,” a pedagogical approach aimed at helping students and educators reflect on the assumptions they use when thinking about the future [1]. This involves recognizing that futures are open to the unknown, shaped by chance, contingency, and the unpredictable effects of individual actions.
For VET, Facer’s insights suggest that researching futures should not be about prediction or preparing for a singular, predetermined technological reality. Rather, it is about democratizing the process of imagining futures. We must ask how our training programs can equip learners not just to adapt to AI, but to critically engage with it and imagine diverse, sustainable ways of living and working. This requires a shift from viewing education merely as a means to fulfil specific economic projects to creating conditions where learners can notice, attend, think, and feel their way through complex systems [1].
Ben Williamson complements this by urging us to examine the “social life of AI in education” [2]. He warns against “technochauvinism”: the flawed assumption that digital technologies are always the solution and will inevitably transform education for the better [2]. Williamson argues that AI in education cannot be viewed merely as a series of technical developments. It is deeply embedded in social, historical, economic, and political contexts. The development and deployment of AI are driven by commercial interests, platformization, and datafication, often prioritized by investors and Big Tech companies seeking continuous revenue streams [2].
When we research the future of AI in VET, Williamson’s work reminds us to look beyond the hype and examine how AI is actually being put to work. We must consider the economic models driving these technologies and the political agendas they serve, such as performance-based accountability and automated decision-making [2]. Crucially, we must anticipate the ethical and social implications, recognizing that AI can amplify existing inequalities and structural discrimination. As Williamson notes, a serious consideration of AI in education involves acknowledging its risks and refusing the assumption that it is a neutral or universally beneficial tool [2].
Bringing these perspectives together, researching futures in VET in the age of AI requires a critical, historically grounded, and socially sensitive approach. We must resist the urge to simply integrate AI into our training programs without questioning its origins, purposes, and impacts. Instead, we should foster environments where educators and learners can collectively imagine and negotiate routes to socially just and ecologically sustainable futures.
By embracing the complexity of our times and the social realities of technology, we can move beyond passive acceptance of AI. We can actively participate in shaping an educational landscape that values human agency, democratic engagement, and a profound care for the world we share.
References
[1] Keri Facer, Becky Parry, Lucy Taylor, Jessica Bradley, and Sabine Little. “Embracing the unpredictable effect of one person: an interview with Professor Keri Facer.” Literacy, Volume 56, Number 1, January 2022.
[2] Ben Williamson. “The Social Life of AI in Education.” International Journal of Artificial Intelligence in Education, 2023.
About the Image
This digital collage modifies Tyrone Comfort’s 1934 painting “Gold is where you find it”. It connects mining with large-scale data extraction. A miner working underground blends into an endless grid of images scraped from the internet to train artificial intelligence without consent. The image links past and present forms of extraction, showing how data, memory and culture are treated as raw material for profit.
