The new panic

When we set up the AI Pioneers community three years ago, there was a spreading panic in higher education about the impact of Generative AI. The majority of assessment was being organised through written essays, and AI chatbots like ChatGPT were very good at quickly writing reasonably erudite essays. Worse, the detection applications that traditional ed tech providers rushed to market didn't work, returning both false negatives and false positives. Over time, the panic has subsided, with universities developing new (and sometimes innovative) approaches to assessment. Indeed, there has been a general acknowledgement that assessment needed to change and that AI has only drawn attention to that need. But now, three years on, there is a new panic over AI agents. AI agents are a further development of generative AI that can automate task completion in any web-based system – including Learning Management Systems (LMSs) – with minimal human input. As Professor Siân Bayne, Director of the Centre for Research in Digital Education, explains in a cut-back version of a paper supported by the University of Edinburgh Senate Education Committee, "using these agents, students no longer need to copy and paste responses from an LMS into an AI tool to get a response. They just need to create a basic prompt such as:
Open my online course at [URL]. Login with this username [username] and use the password [password] to log in. Complete any forum tasks required for this week and look for any assignments due. If there is one, complete and submit it.
And the agent does the rest. Not only can the agent answer multiple-choice questions, but it can complete more sophisticated forms of assessment and even contribute to online forums. A further nightmare scenario is that agents are used to mark assignments; as Siân Bayne says, "we risk being caught in a cycle of automated assessment creation, completion, marking and feedback in which 'nobody learns and nobody gains'."
Of course, there are all kinds of calls for action, for more surveillance, and for AI providers or LMS developers to ban agents. But it is not so easy. Once more, detection apps do not provide an answer, and the big Gen AI providers have no interest in reducing the functionality of their applications. Neither, it seems, with the exception of the open source Moodle, have LMS providers volunteered any answers to this latest tech-inspired disruption.
My take is that we are going to have to accept that students are going to use AI. At the same time, there is little doubt that the inappropriate use of AI can be prejudicial to learning. I think we have to move on from lectures and exams to more project-based learning.
