Navigating the Rise of AI Agents in Education

The conversation around artificial intelligence in education is rapidly evolving. While much of the initial focus has been on generative AI tools that respond to prompts, a new frontier is emerging with the rise of agentic AI: systems that can not only answer questions but also plan, act, and learn in order to achieve goals with minimal human intervention. A recent Jisc workshop in the UK, convened for staff working with AI, brought this topic into sharp focus, and it is clear that the implications for education and training institutions across Europe are profound and demand urgent attention.
The Jisc roundtable, as detailed in their report, framed the core challenge of agentic AI as a shift from “answering to acting.” This distinction is crucial. Unlike earlier AI tools, agentic systems can operate with a degree of autonomy, making decisions and taking actions on behalf of users. This capability opens up a host of governance questions that educational institutions must begin to address. As the Jisc report notes, “If an agent is acting without step-by-step instruction, institutions will need to define clearly where human oversight is mandatory and ensure that staff and students or learners understand where responsibility continues to sit with them.”
One of the most immediate challenges is the prospect of “Bring Your Own Agent” (BYOA), a parallel to the Bring Your Own Device (BYOD) policies of the past, but with far more complex implications. As the Jisc report highlights, a device provides access, but an agent can initiate actions. This raises critical questions about academic integrity, equity, and the very nature of learning. If students can deploy personal agents to complete assignments, contribute to forums, and even take assessments, how can institutions ensure fairness and comparability? The Ohio State University’s ASC Office of Distance Education echoes these concerns, noting that “academic integrity arises as a pressing issue, as the content-generation capabilities of Agentic AI blur lines of authorship.”
The workforce and skills implications of agentic AI are equally significant. The Jisc report points out that “effective use of agentic systems depends on clear goal setting and structured delegation. Yet delegation is not a universal skill.” This highlights a critical new area for skills development. As agentic AI becomes more integrated into the workplace, the ability to frame tasks clearly, set goals effectively, and delegate to AI systems will become a key competency. This is a challenge that vocational education and training (VET) providers, in particular, must address. The Cedefop AI skills survey from 2024 revealed a significant AI literacy gap among European workers, with 40-60% of workers assessed as having poor AI literacy. This underscores the urgent need for training that goes beyond basic digital skills to encompass the more nuanced competencies required to work alongside agentic AI.
At a European level, policymakers are beginning to grapple with these issues. The EU AI Act, with its risk-based approach, classifies many educational AI applications as “high-risk,” subjecting them to stringent requirements for transparency, human oversight, and data governance. The Act’s Article 4, which came into effect in February 2025, mandates a sufficient level of AI literacy for staff and others dealing with AI systems. The European Commission’s “AI Continent Action Plan” and initiatives like the AI Skills Academy further signal a commitment to building AI talent and skills across the Union. However, as the European Data Protection Supervisor (EDPS) has warned, the autonomy of agentic AI presents unique challenges for data protection, with the potential for unforeseen data collection and profiling.
The Jisc workshop report serves as a timely call to action for the education and training sector in Europe. The rise of agentic AI is not a distant prospect; it is a present reality that demands a proactive and strategic response. Institutions can no longer afford to be reactive. They must develop clear policies on the use of AI agents, invest in AI literacy for both staff and students, and redesign assessment to focus on uniquely human skills such as critical thinking, creativity, and collaboration. As the Jisc report concludes, “institutions will need clear positions here. Silence creates risk.” The challenge is not to resist the tide of technological change, but to shape it in a way that enhances learning, empowers individuals, and upholds the core values of education.
References
[1] Jisc. (2026, February 12). Agentic AI Roundtable: Governance, Autonomy and the Future of Educational Agents. Jisc AI in Education Blog.
[2] Ohio State University, ASC Office of Distance Education. (2025, October 14). Agentic AI in Higher Education.
[3] Cedefop. (2025, January 30). Skills empower workers in the AI revolution. Cedefop.
[4] European Commission. (2025, December 4). AI talent, skills and literacy. Shaping Europe’s digital future.
[5] European Data Protection Supervisor. (n.d.). Agentic AI. EDPS.
[6] FeedbackFruits. (2025, February 12). What is the EU AI Act? A comprehensive overview.
About the Image
'The Horizon' was part of the Tipping Point exhibition held in summer 2025 in Edinburgh. Tipping Point explores how artists can help us respond more wisely to the present realities and near-future horizons of AI. Wesley's 'The Horizon' offers a glimpse into a possible near-future of post-abundance computation that also shows us how to better use the tools we have today. The image deconstructs the hardware used in AI applications and its relationship to people and societies. It invites viewers to contemplate the potential implications of components like visual or audio recognition, and to reflect on the experiences of the people living with them. Tipping Point was funded by the Arts and Humanities Research Council (AHRC) and delivered by BRAID in partnership with Inspace at the Institute for Design Informatics, with support from Better Images of AI. Photographed by Chris Scott.
