The Algorithm on Trial
For years, educators have been on the front lines of a crisis we didn’t create. In classrooms and corridors we’ve witnessed the impact of social media on student mental health, attention spans, and social development. We’ve watched as algorithms have shaped how students see themselves, how they learn, and how they relate to the world.
On March 26, 2026, a jury in Los Angeles delivered a damning verdict against Meta and Google, ruling that Instagram and YouTube are “deliberately engineered” to be addictive. The court found the companies negligent in their safeguarding of children, awarding $6 million in damages to a young woman who suffered from body dysmorphia, depression, and suicidal thoughts.
The most significant takeaway from this case is the court’s acknowledgment that these platforms use algorithmic systems designed to maximise engagement at the expense of user wellbeing. As former Instagram employee Arturo Bejar testified, the product “changed from a product you used to a product that uses you.”
The social media algorithm is not neutral. It learns what holds our attention and serves up more of the same. That content is often emotionally charged, extreme, comparative, or negative. So for a teenager already struggling, the algorithm learns to fill their feed with content that deepens insecurity, because that content generates the strongest engagement.
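The reinforcement loop described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model (the topic names, scores, and update rule are invented for illustration, and real recommender systems are vastly more complex), but it captures the core dynamic: items that hold attention longer accumulate higher scores, and higher scores push them further up the feed.

```python
# Toy model of engagement-based ranking (illustrative only; not any
# platform's actual algorithm). Each item carries a learned "engagement"
# score; the feed ranks by that score, and every view reinforces it.

def rank_feed(items):
    """Return items sorted by learned engagement score, highest first."""
    return sorted(items, key=lambda item: item["score"], reverse=True)

def record_engagement(item, seconds_viewed, learning_rate=0.1):
    """Nudge an item's score toward the observed watch time."""
    item["score"] += learning_rate * (seconds_viewed - item["score"])

# Hypothetical feed: neutral content vs. emotionally charged content.
feed = [
    {"topic": "homework tips", "score": 1.0},
    {"topic": "body comparison", "score": 1.0},
]

# If the charged item holds attention longer each time it is shown,
# its score climbs faster, so it rises to the top of the feed --
# a self-reinforcing loop with no notion of wellbeing anywhere in it.
for _ in range(10):
    for item in feed:
        seconds = 30 if item["topic"] == "body comparison" else 5
        record_engagement(item, seconds)

print([item["topic"] for item in rank_feed(feed)])
```

Note that nothing in the loop asks whether the content is good for the viewer; watch time is the only signal, which is precisely the design choice the verdict calls into question.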
The algorithmic feed is engineered to override a user’s intention to stop. This directly conflicts with our goals of fostering sustained attention, deep work, and self-regulation in learners and in ourselves.
The verdict affirms a direct link between algorithmic amplification and psychological harm. Researchers have long documented how recommendation algorithms can push users toward harmful content loops: disordered eating, self-harm, or extremist views. Anxiety, depression, and body dysmorphia are the predictable outcomes of algorithmic systems engineered for profit.
Tech companies have previously been shielded by Section 230 of the Communications Decency Act, which protects them from liability for content published on their platforms. This verdict signals a potential shift. The court’s focus was on design, and specifically on the algorithmic architecture that amplifies some content while suppressing other content.
For education researchers, this opens up a new line of inquiry: if the algorithmic design itself can be deemed negligent, what are the implications for digital literacy? Our teaching must empower learners to recognise and resist algorithmic manipulation.
This verdict won’t eliminate social media, but it creates a powerful precedent to push for laws that require platforms to disable algorithmic recommendation systems for minors by default, returning to chronological or friend-only feeds.
We need digital citizenship curricula to include critical analysis of how algorithms function, how they generate revenue, and how they shape behaviour. Everyone should have an understanding of concepts like engagement-based optimisation, filter bubbles, and the business model behind their attention.
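One of those concepts, the filter bubble, can be demonstrated to learners with a miniature recommender. The sketch below is a hypothetical classroom example (the topics and the "recommend what was clicked most" rule are invented for illustration): a single early click is enough to collapse a varied catalogue into a one-topic feed.

```python
# Miniature filter-bubble demonstration (illustrative only).
# The recommender always suggests the topic the user has clicked most,
# so one early click narrows every future recommendation.
from collections import Counter

def recommend_next(click_history, topics):
    """Recommend the most-clicked topic (first listed wins ties)."""
    if not click_history:
        return topics[0]
    counts = Counter(click_history)
    return max(topics, key=lambda t: counts[t])

topics = ["sports", "music", "news"]
history = ["music"]  # one early click on music

# Each recommendation becomes the next "click", feeding back into
# the history the recommender learns from.
for _ in range(5):
    history.append(recommend_next(history, topics))

print(history)
```

Running this shows the history converging on a single topic, which gives students a concrete, inspectable version of the abstract claim that personalisation can narrow rather than broaden what we see.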