AI, Personal Learning Environments, Personalisation, Pedagogy and Agency

Sometimes the sheer volume of posts, newsletters and opinions about AI and education can feel overwhelming. And given how much of it seemingly comes from AI, the quality is not always great. But recently there seems to be an encouraging move towards people writing well thought out essays which, whether you agree with them or not, raise important issues.
One which has made me think is ‘Beyond Augmentation: Toward a Posthumanist Epistemology for AI and Education’, by J. Owen Matson who, his website says, “explores literature, film, art, AI, and educational technology through the lenses of cognition, media theory, and philosophy—tracing how we learn, create, and think across shifting systems.”
Matson examines the emerging discourse which, he says, “increasingly frames AI in education through the language of enhancement: augmentation, support, partnership.”
And while he acknowledges this as protecting against automation-as-replacement in education he says, “it risks cementing a hierarchical model of cognition that fails to engage with how human and AI systems co-construct knowledge in real time.” He asserts that we are moving from a humanist to a posthumanist epistemology. In a humanist framework, he says “the boundaries of the subject are secure, cognition is internal, and learning is understood as an act of transfer or mastery. In a posthumanist frame, cognition is emergent, relational, and enacted within sociotechnical systems that exceed individual control or comprehension. AI doesn't merely augment our ability to think; it changes what thinking is, what counts as knowledge, and who (or what) gets to participate in its construction.”
It is a long essay and I will return to it in a future blog, especially his ideas about the future role of teachers. But for now I want to pick up what he says about personalised learning. Personalised learning has concerned me for some time. There has been considerable confusion between personalised learning and Personal Learning Environments (which are themselves attracting renewed attention).
The idea of a Personal Learning Environment (PLE) recognises that learning is ongoing and seeks to provide tools to support that learning (Attwell, 2007). It also recognises the role of the individual in organising his or her own learning. Moreover, the pressures for a PLE are based on the idea that learning will take place in different contexts and situations and will not be provided by a single learning provider. Linked to this is an increasing recognition of the importance of informal learning. Research undertaken into Personal Learning Environments and into the impact of online learning during the Covid-19 pandemic has pointed to the importance of agency for learning. In a paper (Portillo et al., 2024) reporting on a qualitative research study which explored the practical and emotional experiences of young people aged 13–17 using algorithmically-mediated online platforms, young people were found to have “a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why. Participants claimed that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained.”
Previous research into Personal Learning Environments suggests that agency is central to the development of Self Regulated Learning (SRL) which is important for Lifelong Learning and Vocational Education and Training (Buchem, Attwell & Torres, 2011). Self Regulated Learning is the process whereby students activate and sustain cognition, behaviours, and affects, which are systematically oriented toward attainment of their goals (Schunk & Zimmerman, 1994). And SRL drives the cognitive, metacognitive, and motivational strategies that learners employ to manage their learning (Panadero, 2017).
Metacognitive strategies guide learners’ use of cognitive strategies to achieve their goals, including setting goals, monitoring learning progress, seeking help, and reflecting on whether the strategies used to meet the goal were useful (Pintrich, 2004; Zimmerman, 2008).
The introduction of generative AI in education raises important questions about learner agency. Agency refers here to the capacity of individuals to act independently and make their own free choices (Bandura, 2001). Yet the personalised learning promoted as one of the great advances of generative AI in education often removes agency, narrowing the learner's room to act and their freedom to choose. As Matson says, “even thoughtful models often fail to grant learners epistemic agency—treating them as data points to adapt to rather than co-participants in knowledge formation.” He continues: “At its best, personalization is framed as a corrective to one-size-fits-all pedagogy: it allows instruction to adapt to individual learners, matching pace, content, and support to a student’s unique profile.” But “too often, personalization is imagined as a unidirectional system in which AI detects, diagnoses, and delivers, while the learner passively receives the appropriate intervention. Even in more nuanced versions of this model, the structure remains asymmetrical: AI adapts, the learner is adapted to.” In a footnote he elaborates on this perspective: “A growing body of critique has questioned the overuse and conceptual vagueness of “personalization” in EdTech. Scholars note that many so-called personalized systems reduce learning to the automated matching of content to inferred needs—obscuring asymmetrical power dynamics while reinforcing a delivery model of education. Personalization, in this context, often functions more as a rhetorical device than a meaningful pedagogical design.”
Matson puts forward his own ideas about how AI can support new pedagogical approaches based on dialogical learning. He says student agency can be a force in shaping the epistemic architecture of the system itself. The present model is asymmetrical: AI adapts, the learner receives. “Missing here is a view of personalization not as optimization but as dialogic space-making—a site in which learners negotiate, resist, and reconfigure how knowledge emerges in tandem with the system.” He quotes Wegerif (2013) who “develops the concept of dialogic space as a pedagogical and epistemological alternative to monologic instruction. Drawing on Bakhtin and Vygotsky, he argues that digital technologies can support dialogic learning only if they are intentionally designed to sustain open-ended, co-constructive inquiry—rather than reinforcing the closed, outcome-driven logic typical of EdTech.”
References
Attwell, G. (2007) Personal Learning Environments—The Future of eLearning? Elearning Papers, 2, 1-8.
Bandura, A. (2001) Social Cognitive Theory of Mass Communication, Media Psychology, Volume 3, pp. 265–299, https://api.semanticscholar.org/CorpusID:35687430
Buchem, I., Attwell, G., Torres, R. (2011) Understanding Personal Learning Environments: Literature review and synthesis through the Activity Theory lens, Research Gate https://www.researchgate.net/publication/277729312_Understanding_Personal_Learning_Environments_Literature_review_and_synthesis_through_the_Activity_Theory_lens
Matson, J. O. (2025) Beyond Augmentation: Toward a Posthumanist Epistemology for AI and Education
Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology. (pp 1-28) https://doi.org/10.3389/fpsyg.2017.00422
Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.
Portillo, P., Dowthwaite, L., Creswick, H., Pérez Vallejos, E., Ten Holter, C., Koene, A., Jirotka, M., Zhao, J. (2024) A call to action: Designing a more transparent online world for children and young people, Journal of Responsible Technology, Volume 19, pp. 1-10
Wegerif, R (2013) Dialogic: Education for the Internet Age. London: Routledge (ISBN-10: 0415536782 ISBN-13: 978-0415536783).
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183. https://doi.org/10.3102/0002831207312909
About the Image
The image depicts a Cambridge University classroom learning about AI. It seeks to deconstruct the notion that learning about AI is “new” by drawing on visual diagrams of data science networks to ground the image in the origins of computing. At the same time, it seeks to capture the flow of information and communication in a classroom.