A machine-based system designed to operate with varying levels of autonomy
The European Union has taken a major step towards regulating artificial intelligence. The European Parliament recently passed the EU AI Act, landmark legislation designed to protect fundamental rights, encourage innovation, and position Europe as a leader in AI. The act, the first of its kind globally, will come into effect in phases over the next two years.
While AI is usually defined as a computer system that carries out tasks normally associated with human intelligence, the act takes a more detailed approach, describing the technology it regulates as a “machine-based system designed to operate with varying levels of autonomy” – a definition broad enough to cover tools such as ChatGPT.
Such a system may show “adaptiveness after deployment” – that is, it learns on the job – and infers from the inputs it receives “how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”. The definition covers chatbots, but also AI tools that, for example, sift through job applications.
The act categorises AI systems by risk level, ranging from minimal-risk applications such as spam filters to high-risk uses in areas such as critical infrastructure and education, and sets stringent requirements for the highest-risk systems.
The act’s focus on fundamental rights includes a ban on specific AI uses in education, which could significantly reshape how AI is employed in schools. Notably, it prohibits emotion recognition technologies in educational settings, a measure aimed at safeguarding students’ rights and well-being and ensuring that AI tools respect their privacy and mental health.
The legislation’s emphasis on transparency and accountability will also shape how educational technology is developed. Developers of AI tools for education will need to ensure their systems are transparent, traceable, and compliant with the stringent governance regime for high-risk AI systems. This could lead to more responsible and ethical AI applications in the education sector, aligned with European values and fundamental rights.
The act has, however, stirred debate among EU member states, with some concerned that it could disadvantage domestic companies and favour non-EU competitors. The EU maintains that the legislation will foster a safer and more equitable AI landscape.
The act’s implications extend beyond immediate technological change. It challenges educators to reconsider the role of AI in education, requiring adaptations in teaching methods, curriculum development, and student assessment. The legislation presents an opportunity to rethink and redesign educational models in light of AI’s potential and risks.
In essence, the EU AI Act sets a precedent for AI governance globally. Its impact on education is profound, prompting a re-evaluation of how AI tools are integrated into learning environments. As the act comes into force, it will be crucial for educators, policymakers, and AI developers to collaborate to ensure that AI’s role in education aligns with ethical standards and enhances learning without compromising students’ rights and well-being.