The European Commission has launched a consultation to develop guidelines and a Code of Practice on transparent AI systems

The European AI Act, which entered into force on 1 August 2024, contains wide-ranging measures, especially for the education and health sectors. However, most of the measures are being introduced gradually, with ongoing consultations between EU member states and sector representatives.
One measure is a commitment to fostering responsible and trustworthy AI development and deployment in the EU. The transparency obligations will be applicable from 2 August 2026.
This will help to ensure that users are informed when they are interacting with an AI system. To that end, the Commission has launched a consultation to develop guidelines and a code of practice on AI transparency obligations, based on the provisions of the Artificial Intelligence Act (AI Act).
The AI Act obliges deployers and providers of generative AI to inform people when they are interacting with an AI system, as well as when they are exposed to emotion recognition or biometric categorisation systems, and to content generated or manipulated by an AI system.
The Commission invites providers and deployers of interactive and generative AI models and systems, as well as of biometric categorisation and emotion recognition systems, together with private and public sector organisations, academic and research experts, civil society representatives, supervisory authorities and citizens, to share their views by 9 October 2025. The consultation is accompanied by a call for expression of interest for stakeholders to participate in the creation of the Code of Practice, also by 9 October 2025.
About the Image
This image is about the exploitation of labour and the inherent harms of training AI models. Since large tech companies decided to pursue scale at all costs, LLMs are currently trained on massive polluted data sets, essentially the entirety of the English-language internet, making it difficult to properly filter, curate or clean the data at scale. Instead, content moderation workers in the global south are required to classify and break down the worst text on the internet into a detailed taxonomy. This relentless exposure to reams of toxic content, day in and day out, takes a huge toll on the mental health of workers and their communities. My image seeks to highlight this work, which is invisible, exploitative and unnecessary, and represents the real human harm of AI development. My inspiration for this piece came from listening to Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, on the Tech Won't Save Us podcast. I created this image using Canva and public domain imagery from Public.work.