AI – Productivity, Jobs and Skills
Much of the excitement about Generative AI was driven by the idea that it would boost productivity (and thus profit). Conversely, one of the fears was that it would lead to job losses, although there was little or no consensus about how severe such losses might be; indeed, some commentators speculated that new jobs created by AI would balance out the losses. Early research and reports into the impact of AI were conflicted, with increasing levels of hype perhaps overwhelming more sober research findings. And even now there is only limited consensus on the impact of […]
Recording Now Available: AI Pioneers’ session at the EODLW2024
Missed the EDEN EODLW2024 session on “AI and Education: […]
AI Literacy and Legislation
I’ve been interviewing the AI Pioneers team this week for the project’s annual evaluation, and one thing that keeps coming up is the EU AI Act and the dynamics of responsibility. The EU AI Act became law on August 1, 2024. It says companies that make or use AI systems must ensure their employees and anyone else involved in operating those systems have enough knowledge about AI to do their job safely and effectively. Graham, George and I explored the definition of AI Literacy in our recent preprint, AI and Education: Agency, Motivation, Literacy and Democracy. The European […]
Locally developed technology best serves communities
I’m impressed by the work being carried out by the DAIR Institute. DAIR stands for Distributed AI Research Institute. They say: “We are an interdisciplinary and globally distributed AI research institute rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial. Our research reflects our lived experiences and centers our communities.” They believe locally developed technology better serves its communities than solutions imposed from afar. In a recent blog post, Decentralized, Locally-Tailored Technology, Nyalleng Moorosi looks at the difference between the tools that […]
Concept development and implementation strategies for AI in VET – workshop in Bremen
On November 6, 2024, the Institute of Technology and […]
How to be a trusted voice online
UNESCO have launched an online course in response to a survey of digital content creators, 73 per cent of whom requested training. According to UNESCO, the course aims to empower content creators to address disinformation and hate speech and to provide them with a solid grounding in global human rights standards on both Freedom of Expression and Information. The content was produced by media and information literacy experts in close collaboration with leading influencers around the world to directly address the reality of situations experienced by digital content creators. The course has just started and runs for 4 weeks; over 9 000 […]
Public Domain Data
I’ve been looking at how we could use Open Source software to develop Generative AI applications for education. Of course, one of the issues is data for training the AI. And it’s interesting that reports say the quality of training data is getting worse, probably because so much poor-quality data is being produced by AI. So I was interested in an article, The Making of PD12M: Image Acquisition, published on the Spawning blog. It reports that, in the evolving landscape of AI data collection, the Spawning team has introduced Public Domain 12M (PD12M), an innovative 12.4 million image-text […]
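As a rough illustration of how a public-domain image-text dataset like PD12M might be explored with open source tools, the sketch below streams a few metadata records using the Hugging Face datasets library. The repository id "Spawning/PD12M" and the column names "url" and "caption" are assumptions for illustration, not details taken from the Spawning post.

```python
# Minimal sketch: stream a few PD12M image-text records for inspection.
# Assumes the metadata is published on Hugging Face as "Spawning/PD12M"
# with "url" and "caption" columns; adjust names to the actual release.
from datasets import load_dataset

pd12m = load_dataset("Spawning/PD12M", split="train", streaming=True)

# Streaming avoids downloading all 12.4 million entries at once.
for i, record in enumerate(pd12m):
    print(record.get("caption"), record.get("url"))
    if i >= 4:
        break
```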
Social generative AI for education
I am very impressed with a paper, Towards social generative AI for education: theory, practices and ethics, by Mike Sharples. Here is a quick summary, but I recommend reading the entire article. In his paper, Mike Sharples explores the evolving landscape of generative AI in education by discussing different AI system approaches. He identifies several potential AI types that could transform learning interactions: generative AIs that act as possibility generators, argumentative opponents, design assistants, exploratory tools, and creative writing collaborators. The research highlights that current AI systems primarily operate through individual prompt-response interactions. However, Sharples suggests the next significant […]
AI and Education: Agency, Motivation, Literacy and Democracy
Graham Attwell, George Bekiaridis and Angela Karadog have written a new paper, AI and Education: Agency, Motivation, Literacy and Democracy. The paper has been published as a preprint for download on the ResearchGate website. This is the abstract. This paper, developed as part of the research being undertaken by the EU Erasmus+ AI Pioneers project, examines the use of generative AI in educational contexts through the lens of Activity Theory. It analyses how the integration of large language models and other AI-powered tools impacts learner agency, motivation, and AI literacy. The authors conducted a multi-pronged research approach including […]
Do we need specialised AI tools for education and instructional design?
In last week’s edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks like writing objectives, selecting instructional strategies and creating lesson plans. The question Hardman set out to explore was: “How well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning […]