The Configuration of Passivity and Why AI in Education Is Facing a Growing Backlash

The initial wave of uncritical enthusiasm for Artificial Intelligence in education is beginning to break against pedagogical reality. A growing scepticism is taking root among teachers, students, and parents, who increasingly question the narrative that generative AI is an unalloyed good for learning. This opposition is not driven by technophobia, but by a mounting body of evidence suggesting that AI, rather than enhancing cognitive development, often acts as a barrier to it.
Recent research provides empirical weight to these concerns. A 2025 study from the MIT Media Lab, which used EEG scans to measure brain activity during essay writing, found that participants using ChatGPT showed reduced neural connectivity in networks associated with memory and creativity [1]. Furthermore, memory retention dropped significantly; users struggled to recall what they had just written [1]. This phenomenon, which researchers term "cognitive debt," suggests that reliance on AI tools may permanently erode capacities in critical thinking, creativity, memory, and executive function [1].
For children, the stakes are considerably higher. As educational researcher Timothy Cook notes, while adults who offload thinking to AI lose capacity they have already built, children may never build that capacity at all [2]. A young learner cannot audit an AI's analysis of a historical event or a biological process if they do not yet possess the foundational domain knowledge themselves [2]. Whereas adult interaction with AI is generally the delegation of automatable tasks, a child's interaction is more likely to be substitution - where the AI makes the micro-judgments the child is supposed to be developing [2].
The Social Pressure to Adopt
Despite these risks, the adoption of AI in schools continues, driven in part by a complex dynamic of social pressure. A 2026 study by researchers at the University of Chicago Booth School of Business found that parents' fear of their children falling behind often overshadows their concerns about AI's potential adverse effects [3]. In their research, parents who received negative information about AI - specifically that it led to a 20 percent decline in quantitative reasoning scores - were far more likely to support an outright ban on AI use in schools [3].
Crucially, however, these same parents were no less willing to pay for a premium AI subscription for their own children [3]. The demand for AI tools increased significantly as the perceived percentage of peers using AI rose. The researchers termed this a "rat-race dynamic," where parents do not necessarily believe AI is beneficial, but rather that it is socially necessary to keep up [3].
Meanwhile, students themselves are experiencing profound ambiguity. A RAND Corporation survey from early 2026 found that 62 percent of students in middle school and above used AI for homework, a significant increase from the previous year [4]. Yet, simultaneously, 67 percent of students endorsed the statement that "the more students use AI for their schoolwork, the more it will harm their critical thinking skills" - an increase of more than ten percentage points in just ten months [4]. The very students using the technology are increasingly aware of its cognitive costs.
Teachers Reclaiming Professional Autonomy
In the face of these challenges, educators are pushing back against the imposition of AI tools by EdTech companies and policymakers. The European Trade Union Committee for Education (ETUCE) has strongly articulated this resistance. As ETUCE President John MacGabhann recently stated, "Good teaching exists entirely independently of Artificial Intelligence. Good teaching is not dependent on, an emanation of or generated by AI" [5].
The union argues that AI is merely a tool, not the "philosopher's stone" that tech cheerleaders claim [5]. They warn against the risk of teachers offloading their professional responsibility to AI, and of students using it as a shortcut to bypass the necessary struggle of learning [5]. Consequently, teachers are demanding full involvement in the governance and co-design of AI in education, opposing the hasty procurement of these technologies from public funds without robust democratic guardrails [5].
Rethinking Passivity: From Capacity to Configuration
At the heart of this growing opposition is a deep concern about AI turning students into "passive learners." However, the discourse surrounding passivity requires careful unpacking. Concerns about passivity presuppose an idealized subject who ought to be active in a recognizable way: engaged, expressive, self-directed, and visibly thinking. Passivity, in this framework, appears as a deviation from this idealized form of activity, and the explanation for that deviation is assigned directly to the technology: AI makes students passive; platforms reduce attention; automation erodes thinking.
This results in a hybrid formation that dominates educational discourse. On one side is normative humanism, which defines what learners should be. On the other is causal determinism, which explains what technical systems do to them. Both positions stabilize the same binary: the active human subject versus the acting technical system. The contradiction between them emerges from a humanist epistemology that treats agency as a property possessed by the human subject, one that is then threatened by technical mediation.
The concept of agency itself deserves closer scrutiny here. Taking a sociological perspective, Eteläpelto and colleagues define agency as the capacity of each individual to respond to changes in knowledge, practice, and the work environment [6]. Crucially, this capacity is not free-floating: it is conditioned by the symbolic and material positions that individuals hold within a given ecosystem [6]. A recent study by Littlejohn, Durán del Fierro, Kennedy, and Chisholm, published in the Journal of Workplace Learning, develops this argument in the context of digital transformation, showing how agency mediates the relationship between professionals and transformed technical environments — and how that mediation can produce either productive, agentic responses or constrained, passive ones, depending on how the system is configured [7]. The implication for education is direct: what appears as learner passivity may be less a failure of individual motivation than a structural effect of how the learning environment has been designed.
An alternative approach therefore begins by shifting the terms of the problem from capacity to configuration. In this view, passivity is not an inherent property to be corrected in learners. Instead, passivity is defined as an effect of configurations in which responses fail to reorganize the conditions of their own emergence. Under this description, passivity cannot be located solely in the learner, nor can it be reduced to outward markers of activity or engagement.
Certain technical configurations — many of which are designed explicitly in the name of "engagement" or "personalization" — tend to produce precisely this effect. Systems that pre-structure responses limit interpretive deviation. Systems that optimize for completion privilege output over revision. Systems that metricize activity privilege only what can be counted. In each case, the configuration closes down the very space of uncertainty and struggle in which genuine learning occurs. This is what Lodge and Loble, in their 2026 report on AI and cognitive offloading, describe as the "performance paradox": AI can boost a student's performance on an immediate task while simultaneously undermining the durable learning that is the actual goal of education [8]. The convenience of a pre-configured answer is precisely what Fan and colleagues have termed "metacognitive laziness" — a state in which the learner abdicates self-regulatory responsibility to the tool, depriving themselves of the very processes through which those capacities are built [9].
When passivity is understood as a configuration, it designates a contingent effect of infrastructural relations, not an inherent property of the AI or an inevitable impact on learners. Once agency is located in configuration, it becomes distributed across the relations that condition a response, rather than residing entirely in the learner or the system.
Pedagogy must shift accordingly. The goal is no longer the correction of learners, but the configuration of the relations through which response becomes possible. Passivity no longer appears as a deficit to be corrected in the student, but as an effect of relations that limit the capacity of responses to alter the conditions from which they arise. Any attempt to specify those conditions in advance — as many AI systems do — risks reinstating the same structure under a different name, returning the problem of passivity in a more refined form.
The growing scepticism toward AI in education is therefore entirely justified. It is a necessary corrective to a technological determinism that ignores the realities of cognitive development and the complexities of pedagogical configuration. Reclaiming agency in the classroom means recognizing that true learning requires the friction, revision, and structural freedom that pre-configured AI systems inherently seek to eliminate.
References
[1] MIT Media Lab (2025). Your Brain on ChatGPT. URL: https://www.media.mit.edu/publications/your-brain-on-chatgpt/
[2] Cook, T. (2026, March). Adults lose skills to AI. Children never build them. Psychology Today.
[3] University of Chicago Booth School of Business (2026). Study of parental attitudes toward children's AI use.
[4] RAND Corporation (2026). Survey of student AI use. URL: https://www.rand.org/pubs/research_reports/RRA4742-1.html
[5] European Trade Union Committee for Education (ETUCE). Statement by ETUCE President John MacGabhann on Artificial Intelligence in education.
[6] Eteläpelto, A., et al. DOI: https://doi.org/10.1007/978-94-017-8902-8
[7] Littlejohn, A., Durán del Fierro, F., Kennedy, E., & Chisholm, J. Journal of Workplace Learning. URL: https://discovery.ucl.ac.uk/id/eprint/10223671/
[8] Lodge, J., & Loble, L. (2026). Report on AI and cognitive offloading. DOI: https://doi.org/10.71741/4pyxmbnjaq.31302475
[9] Fan, Y., et al. (2025). Beware of metacognitive laziness. British Journal of Educational Technology. DOI: https://doi.org/10.1111/bjet.13544
About the Image
The image shows a tree with an ornate picture frame leaning against the trunk. Inside the frame, the photo has been manipulated to look glitchy. This is a photo I took myself in my neighbourhood - someone had left the picture frame leaning against a tree outside their house, and I noticed it and photographed it. My inspiration was to think about how AI machine vision already datafies the natural world and generates glitchy images of real living things such as trees, distorting our perception of them. Yet AI image generators present their images as if they were somehow more creative or fancier than those we can take with our smartphones (the fancy frame inspired this idea). I used Canva to glitch the photo and edited the glitched version to fit inside the frame in the original photo of the tree trunk.
