Where are we with AI regulation?

The debate on regulating AI has not gone away. The European Union is moving ahead with the implementation of the Artificial Intelligence Act (AI Act). The Act establishes a common regulatory and legal framework for AI within the European Union and came into force in August 2024, although its provisions come into operation in stages over the following months and years.
The Act regulates providers of AI systems and entities using AI in a professional context, and classifies AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited and minimal – plus an additional category for general-purpose AI.
- Applications with unacceptable risks are banned.
- High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
- Limited-risk applications only have transparency obligations.
- Minimal-risk applications are not regulated.
For general-purpose AI, including generative AI, transparency requirements are imposed, with reduced requirements for open source models and additional evaluations for high-capability models.
AI systems used in education are classified as high-risk and, depending on their intended use, are subject to quality, transparency, human oversight and safety obligations; in some cases a "Fundamental Rights Impact Assessment" will be required before deployment.
Although the Act did not deliver everything campaigners wished of it, it makes clear that the use of AI is regulated in the European Union. That is not so in the USA. Some states have introduced their own regulation, but Trump is now pushing an attachment to the government finance bill which would make any such state regulation illegal.
In the UK, a big debate is raging. The government is trying to pass legislation that would make it legal for copyrighted internet content to be used for training Large Language Models unless content creators explicitly opt out (and it is not clear at the moment how they would do that). The opposition is being led by cultural content producers, authors, musicians and so on, who are campaigning for LLM developers to have to ask (and pay?) for the use of any copyrighted material.
And so it continues. Attention is increasingly focused on the aims, actions and ethics of the large technology companies. Hopefully we will see new approaches around Open Source AI which can, for example, allow educationalists to develop AI-based applications, overcoming the control exercised by those companies.
I'll try to post more practical advice for educationalists about how the European AI Act impacts education in the next two weeks.
About the image
Giant QR code-like patterns dominate the cityscapes, blending seamlessly with the architecture to suggest that algorithmic systems have become intrinsic to the very fabric of urban life. Towering buildings and the street are covered in these black-and-white codes, reflecting how even the most basic aspects of everyday life (where we walk, work, and live) are monitored. The stark black-and-white aesthetic not only underscores the binary nature of these systems but also hints at what may and may not be encoded and, therefore, lost, such as the nuanced "colour" and complexity of our world. Ultimately, the piece invites viewers to consider the pervasive nature of AI-powered surveillance systems, how such technologies have come to define public spaces, and whether there is room for the "human" element. Adobe Firefly was used in the production of this image, using consented original material as input for elements of the image. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.