Public Voice
The UK ESRC Digital Good Network has published a call for abstracts for a proposed special issue of the journal Big Data and Society, aiming to advance scholarship on the state of the art and future prospects of including public voices in AI. They note that ‘public voice’ is not easy to define or operationalise: “There is no one ‘public’. Benefits, harms and risks are distributed unevenly. The hopes, concerns and experiences of different groups with AI vary. What has been identified as a ‘participation gap’ is worsened by insufficient and ineffective processes of consultation, implementation and ongoing management. Compounding these issues are structural […]
Consultation on AI Literacy Framework
The European Commission and OECD, with support from Code.org, have released a new draft of their proposed AI Literacy Framework. The framework defines what primary and secondary students should know and be able to do in a world shaped by AI. The European Commission is now inviting feedback from educators, policymakers and researchers. They say this input will inform the final version of the AI Literacy Framework and will help shape how students are prepared to engage with AI – critically, creatively and ethically – and they invite feedback from “across geographies, roles, and perspectives” to ensure the framework is relevant and […]
Spurious Sovereignty
For several years Helen Beetham has been developing an increasingly critical analysis of the development and social impact of Generative AI through her blog and newsletter, Imperfect Offerings. Her latest post, Marking the Government’s homework on public sector AI, includes an analysis and critique of the UK government’s plans for automated marking and autonomous missiles, and for making public data ‘safe’ by selling it to private ‘security’ businesses, all in the name of ‘sovereignty’. It goes further with an in-depth analysis of the data industry and the UK government’s plans for the development of large data centres, in […]
A Solid C+ Performance or a Caricature of Teaching Behaviour?
Just a quick follow-up on last Thursday’s post on the newly launched OpenAI Study Mode. I was rather surprised by the muted reaction to it, although launching in the middle of the summer holidays may have had something to do with that. In any case, what reaction there was was fairly critical. As a member of OpenAI’s educator-advisor group, leading HR expert Philippa Hardman had early access to Study Mode and gave her opinion after a week of use: “✔️ What Study Mode Gets Right → Socratic Dialogue: Guides with questions instead of giving direct answers, promoting deeper thinking (Collins & […]
Artificial Intelligence and Democratic Competences in Vocational Education and Training
Last week, along with AI Pioneers partner George Bekiaridis, I attended a Working Group meeting at the Council of Europe headquarters in Strasbourg. In a LinkedIn post, George explains what the meeting was about: “I’m honored to be contributing to a vital initiative at the heart of European education policy! Last week, I participated in the inaugural meeting of the Council of Europe working group tasked with developing a new Committee of Ministers Recommendation on Vocational Education and Training (VET) and a Culture of Democracy.” Read about the first meeting here: Council of Europe Newsroom – First meeting on new […]
OpenAI Study Mode: Is this really a step forward?
Julieta Longo & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/ On Wednesday, OpenAI and edtech company Instructure, the company which owns the Canvas Learning Management System, announced a partnership. Going forward, they say, AI models will be embedded within Canvas to help teachers create new types of classes, assess student performance in new ways, and take some of the drudgery out of administrative tasks. They go on to explain that at the centre of the Canvas integration is a new kind of assignment called the LLM-Enabled Assignment which allows educators to design interactive, chat-based experiences inside Canvas using OpenAI’s large language models, […]
The changing role of teachers: from epistemic authority to relational steward
Some time ago, I posted an article about J Owen Matson’s ideas about a Posthumanist Epistemology for AI and Education. Since then Matson has gathered a growing audience on LinkedIn for his frequent, although sometimes difficult, blog posts. I promised to follow up with an article on his reflections on the future role of teachers. And here it is. Much of Matson’s work focuses on the nature of human and machine cognition. He builds on Katherine Hayles’ theory of cognition as “a process that interprets information in contexts that connect it to meaning.” Matson advances “a posthumanist view of AI-human […]
Get your ticket to our final conference now!
We are delighted with the large number of registrations: […]
The Current Global AI Landscape and What It Means for Education
Donald Trump’s AI action plan has now been released, […]