80 per cent of young people in the UK are using AI for their schoolwork

A growing number of studies are looking at how young people use AI, for learning and otherwise. Although I am not convinced by the title, “Teaching the AI-Native Generation,” a new report from Oxford University Press offers a somewhat sobering glimpse into how young people in the UK are actually using these new tools.
What makes this report particularly interesting is the scale of the survey: researchers polled 2,000 students aged 13–18 across the UK, and the results are both encouraging and challenging.
Unsurprisingly, the report finds that AI usage is already widespread. Eight in ten young people are using AI for their schoolwork; in London, that figure jumps to over 90%. It’s not a niche activity; it’s a mainstream part of how they learn and do their homework.

But here’s where it gets more complex. The students themselves seem acutely aware of the double-edged nature of these tools. On one hand, an overwhelming 90% believe AI has helped them develop valuable skills, from problem-solving to creative thinking. On the other, 60% also feel it has had a negative impact, making it too easy to find answers without doing the work, or limiting their own creativity.
This brings us to the crucial issues of trust and critical literacy. The report reveals that fewer than half of the students feel confident they can identify AI-generated misinformation. This isn’t just a technical problem of fact-checking; it strikes at the heart of what it means to learn and think critically in an age of synthetic information.
What’s encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them work out whether AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference: the issue of AI in education is fundamentally pedagogical and ethical, not just technological.
So, what can be done? The report offers some very practical recommendations that resonate with my own thinking. It’s not about banning AI, but about intentional and responsible integration. The suggestions are clear and sensible:
- Be Intentional: Schools need to be clear about why they are using a particular AI tool. Is it genuinely solving a problem or just chasing a trend? And critically, does it preserve the teacher’s autonomy to make pedagogical decisions?
- Support Teachers and Learners: This is about fostering a culture of critical use. The report stresses the need for AI tools that encourage active learning and higher-order thinking rather than passive consumption, and it highlights the importance of clear guidance on critically verifying any AI-generated material.
- Prioritise Safety and Privacy: This almost goes without saying, but in the rush to adopt new tools, it’s a point that can’t be overstated. Clear policies, data protection, and content moderation are not optional extras; they are fundamental.
One of the experts quoted in the report, Olga Sayer, puts it beautifully: “AI has changed how we learn, but it hasn’t changed why we learn. The ultimate goal of education remains the same — to think independently and creatively, and to grow as a person.”
This report is a timely reminder that while we are all talking about the technology, the real conversation needs to be about people. It’s about how we, as educators, can support this “AI-Native Generation” not just to use these tools, but to use them wisely, critically, and ethically. The students are ready for that conversation. The question is, are we?