Is the Generative AI Boom Heading for a Bust? A View from Vocational Education

Over the past year, it has become almost impossible to avoid the relentless hype surrounding Generative AI. We are told it will revolutionize everything, from how we work and learn to the very fabric of our economy. Yet, a growing chorus of skeptical voices, from seasoned investors to tech commentators, is asking a question that feels both urgent and familiar: are we in the midst of a colossal AI bubble, and if so, what happens when it pops?
This is not just a question for Wall Street. For those of us in vocational education and training (VET), the answer has profound implications. We are on the front lines, tasked with preparing the future workforce for a world being reshaped by these technologies. Understanding the stability of the current AI boom is crucial for making informed decisions about curriculum, pedagogy, and the skills we prioritize.
The Anatomy of a Bubble
Economic bubbles occur when the price of an asset—be it Dutch tulips in the 17th century or dot-com stocks in the late 1990s—rises to levels far exceeding its intrinsic value, driven by speculation and hype. The current excitement around AI bears some of the classic hallmarks. As commentators Michael Spencer and Justin Kollar have pointed out, the "grossly overvalued private market valuations of AI startups in the U.S. is one of the classic signs and signals that we are indeed in an AI bubble." It is difficult to justify billion-dollar valuations for companies with no significant product or revenue.
This speculative frenzy is fueled by an unprecedented torrent of capital. Total AI capital expenditures in the U.S. are projected to exceed $500 billion in 2026 and 2027, a figure roughly equivalent to the annual GDP of Singapore (Thompson, 2025). Yet, as the Wall Street Journal has reported, American consumers spend only a fraction of that—around $12 billion a year—on AI services. This vast disconnect between investment and revenue is a flashing red light for many analysts.
Adding to these concerns is the phenomenon of "circular financing." We are seeing a complex web of investments and partnerships where major tech players appear to be propping each other up. For instance, OpenAI strikes a multi-billion dollar deal with Oracle for data centers; Oracle, in turn, spends billions on chips from Nvidia; and Nvidia, a key backer of OpenAI, invests in other AI ventures that are also major customers for its chips. This creates a self-reinforcing cycle that can inflate the appearance of demand and value, making the ecosystem look more robust than it actually is (Smith, 2025).
The Sobering Reality of Implementation
Beyond the financial acrobatics, there is a more fundamental problem: the struggle to translate AI hype into tangible business value. A widely cited MIT study (2025) sent shock waves through the industry with its finding that a staggering 95% of generative AI pilot projects are failing to deliver a meaningful return on investment. This suggests that while the technology is impressive, integrating it effectively into real-world business processes is far more difficult than the breathless headlines suggest.
We are already seeing some high-profile examples of companies pulling back from their ambitious AI plans.
- The buy-now, pay-later company Klarna laid off hundreds of customer service staff in 2024, claiming AI could do the job. Less than a year later, it was hiring again, with the company conceding that "AI gives us speed, talent gives us empathy."
- Fast-food giants like McDonald’s and Taco Bell have ended their experiments with AI-powered voice assistants in drive-throughs after the technology struggled with the complexities of customer orders.
- Despite a billion-dollar commitment to AI, the vast majority of Coca-Cola's advertisements are still not being made with generative AI, as the technology has struggled to replicate the creative spark and emotional connection of human-led campaigns.
These examples do not mean that AI is useless, but they do highlight a critical gap between its potential and its current capabilities. As one commentator noted, many are still figuring out "what the hell AI even does," let alone how to build a sustainable business model around it.
Technology Hype
The hype around Generative AI, much of it generated by the Large Language Model companies themselves, is not helping. It is becoming evident that scaling models by incorporating ever larger quantities of data is no longer delivering the expected gains: GPT-5 was largely disappointing. AI agents, the current fashion, have limitations on what they can achieve, and there are growing doubts over the quality of computer code produced by Generative AI. The vast power demands of Gen AI models are driving a boom in data centre and infrastructure development, but there is no guarantee that such infrastructure will be needed in the future. And despite the large numbers of AI users, the number of paying subscribers is far smaller.
A Balanced Perspective
Of course, not everyone is convinced that the sky is falling. Analysts at Goldman Sachs, for example, argue that while there are "signs of froth," the US tech sector is not in a full-blown bubble... yet. They point to the strong cash flow and profitability of the established tech giants, which stand in contrast to the cash-burning startups of the dot-com era. Nvidia's CEO, Jensen Huang, has also downplayed bubble fears, arguing that the demand for AI computing is real and sustainable.
In their book Bubbles and Crashes, economists Brent Goldfarb and David A. Kirsch provide a useful framework for understanding these phenomena. They argue that technological innovations with high levels of uncertainty and compelling narratives are ripe for bubble formation. Generative AI, with its grand promises of Artificial General Intelligence (AGI) and its still-undefined business models, fits this description perfectly. However, they also note that even bubbles that burst can leave behind a valuable legacy of innovation and infrastructure.
Implications for Vocational Education and Training
So, where does this leave the VET sector? We are caught between the pressure to embrace the AI revolution and the risk of investing heavily in a potential bubble. If we rush to train learners on specific AI platforms and tools that may become obsolete overnight, we are doing them a disservice. Conversely, if we ignore the genuine shifts that AI is bringing to the workplace, we risk leaving our learners unprepared.
The current uncertainty suggests that the most prudent path for VET is to focus on developing durable, transferable skills that will remain valuable regardless of which way the technological winds blow. This means prioritizing:
- Critical AI Literacy: Instead of just teaching learners how to use AI tools, we need to teach them how to think critically about them. This includes understanding their limitations, biases, and ethical implications. It means fostering a healthy skepticism of the hype and a nuanced understanding of where AI can—and cannot—add value.
- Human-Centric Skills: As the Klarna example illustrates, there are some things that AI, at least in its current form, cannot replicate. Empathy, creativity, complex problem-solving, and collaborative teamwork are becoming more, not less, important in an age of automation. Our curricula should reflect this.
- Adaptability and Lifelong Learning: The one certainty is that the technological landscape will continue to change at a rapid pace. The most important skill we can impart to our learners is the ability to adapt, unlearn, and relearn throughout their careers. This requires a shift away from static qualifications and towards a more dynamic, lifelong approach to learning.
- A Focus on the Process, not just the Product: As we integrate AI into our own teaching and learning practices, we must be wary of the same pitfalls that have led to the 95% failure rate in the corporate world. We should start small, focus on real pedagogical problems, and critically evaluate the impact of any new technology before scaling it up.
The AI boom is a fascinating, complex, and often bewildering phenomenon. Whether it ends in a spectacular crash or a gentle deflation remains to be seen. For those of us in vocational education, the challenge is to navigate this uncertainty with a clear-eyed focus on our core mission: to empower learners with the skills, knowledge, and critical perspective they need to thrive in the world of tomorrow, whatever it may look like.
References
- Dalio, R. (2025, October 28). Ray Dalio says a risky AI market bubble is forming, but may not pop until the Fed tightens. CNBC. https://www.cnbc.com/2025/10/28/ray-dalio-bubble-ai-federal-reserve.html
- Goldman Sachs. (2025, October 28). Top of Mind: AI: in a bubble? https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble
- Merchant, B. (2025, October 27). AI Is the Bubble to Burst Them All. WIRED. https://www.wired.com/story/ai-bubble-will-burst/
- Thompson, D. (2025, October 2). This Is How the AI Bubble Will Pop. Derek Thompson. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
- Smith, N. (2025, October 22). Should we worry about AI's circular deals? Noahpinion. https://www.noahpinion.blog/p/should-we-worry-about-ais-circular
- Compliance Week. (2025, October 10). What compliance can learn from a 95 percent AI pilot failure rate. https://www.complianceweek.com/opinion/what-compliance-can-learn-from-a-95-percent-ai-pilot-failure-rate/36278.article
- MLQ.ai. (2025, October 10). Klarna CEO admits aggressive AI job cuts went too far, starts hiring again after US IPO. https://mlq.ai/news/klarna-ceo-admits-aggressive-ai-job-cuts-went-too-far-starts-hiring-again-after-us-ipo/
- BBC News. (2025, August 29). Taco Bell rethinks AI drive-through after man orders.... https://www.bbc.com/news/articles/ckgyk2p55g8o
- Forbes. (2024, November 16). Coca Cola's AI-Generated Ad Controversy, Explained. https://www.forbes.com/sites/danidiplacido/2024/11/16/coca-colas-ai-generated-ad-controversy-explained/
- Cedefop. (2025, July 25). Germany: AI emerging as key VET competence. https://www.cedefop.europa.eu/en/news/germany-ai-emerging-key-vet-competence
About the image
The image is a digitally altered medieval-style illustration featuring Penelope (from Greek mythology) labeled by name. She appears seated, engaged in weaving—but instead of traditional thread, the loom is overlaid with binary code (1s and 0s), symbolising the opaque processes of algorithmic technologies. This piece contrasts manual weaving with algorithmic generation, and invites questions about who programs, who controls, and who gets displaced or disoriented in the development and deployment of AI. The chaotic table and spilled drink represent the poor implementation of technologies and the unintended consequences that arise when workers are excluded from planning and oversight. The piece combines digital collage and image manipulation, layering classical woodcut imagery with binary code overlays, glitch patterns, and digital motifs like web graphics and transparency grids. A mash-up of medieval manuscript aesthetics and contemporary data visualisation, the clash of styles underscores the tension between old systems of labor and new algorithmic frameworks. All images were taken from the public domain. This image was submitted as part of the Digital Dialogues Art Competition, which was run in partnership with the ESRC Centre for Digital Futures at Work Research Centre (Digit) and supported by the UKRI ESRC.
