AI Literacy and Legislation
I’ve been interviewing the AI Pioneers team this week for the project’s annual evaluation, and one thing that keeps coming up is the EU AI Act and the dynamics of responsibility. The EU AI Act became law on August 1, 2024. It says companies that make or use AI systems must ensure their employees, and anyone else involved in operating those systems, have enough knowledge about AI to do their jobs safely and effectively.
Graham, George and I explored the definition of AI literacy in our recent preprint, AI and Education: Agency, Motivation, Literacy and Democracy.
The European Union’s new AI Act, which came into force in August 2024, includes specific requirements for AI literacy under Article 4. This means companies and organizations that create or use AI systems must ensure their staff and collaborators understand AI to a level appropriate for their roles. That includes knowing how AI works, understanding its implications, and using it responsibly. There is an assumption that AI literacy will also bring AI responsibility and some kind of internal self-regulation. The DALL-E generated elephant in the room is that even the developers aren’t really sure how AI works: https://www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/
The EU rules apply to developers, users, and anyone working with AI systems, even outside the EU. Each organization needs to tailor its approach based on the experience, education, and context of its team. The Act encourages AI literacy for everyone involved in the AI lifecycle, but the specifics on how to achieve this are vague.
What does the EU AI Act say about AI literacy?
The EU AI Act became law on August 1, 2024. Article 4 says:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Who does this apply to?
The rules apply to:
- People or companies that develop or sell AI systems (providers).
- People or companies that use AI systems in their work (deployers).
It doesn’t matter where the company is based: the rules also apply to businesses outside the EU if their AI systems are used in the EU. These companies must ensure their staff and others they work with understand AI, its outputs, and its impacts.
What does AI literacy mean?
The Act doesn’t give a single clear definition of AI literacy, but it says it includes:
- Knowing how to use AI safely and interpret its results.
- Understanding how decisions made with AI can affect people.
- Making sure AI is used correctly in the specific context for which it was designed.
When do these rules start?
The AI literacy obligations apply from February 2, 2025.
What happens if you don’t follow the rules?
While there are no specific penalties for ignoring the AI literacy requirements, providing incorrect or misleading information to the authorities about your AI practices can lead to fines of up to €7.5 million or 1% of annual worldwide turnover, whichever is higher.
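To make the “whichever is higher” cap concrete, here is a minimal sketch; the fine_cap_eur function and the turnover figures are my own illustration, not part of the Act and not legal advice.

```python
# Minimal sketch of the penalty ceiling described above, assuming the
# "whichever is higher" rule. Figures are hypothetical examples only.

def fine_cap_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for supplying incorrect or misleading
    information: EUR 7.5 million or 1% of worldwide annual turnover,
    whichever is higher."""
    return max(7_500_000, 0.01 * annual_worldwide_turnover_eur)

if __name__ == "__main__":
    for turnover in (50_000_000, 2_000_000_000):  # two hypothetical companies
        print(f"Turnover EUR {turnover:,} -> fine cap EUR {fine_cap_eur(turnover):,.0f}")
```

For a smaller company the flat €7.5 million ceiling applies; for a large one, 1% of turnover quickly exceeds it.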
What are the implications for vocational education and training (VET)?
Vocational training courses will need to integrate AI knowledge into the curriculum, tailoring training to industry-specific needs and roles. Partnerships with employers and AI providers will help align programs with real-world applications, and the existing workforce will need to upskill. The Act also fosters new roles, such as AI literacy trainers and compliance officers.
Resources
If, like me, you have more questions than answers, there are some resources, including a high-level summary that promises to deliver the key points within 10 minutes, and a compliance checker.
AI Pioneers are also working on a schema for the ethical use of AI in education, which will be useful for those of you tasked with teaching this to the cohort of current and soon-to-be-employed learners who will be ‘using AI in their work’.
References