The article charts how the generative artificial intelligence (AI) boom that began when OpenAI released ChatGPT in late 2022 has run into a reckoning in 2025, as breathless expectations collide with slower-than-promised progress and underwhelming business results. After a period in which technology companies raced to outdo each other with voice, image, and video features and insisted that progress was exponential, studies from organizations including the US Census Bureau and Stanford University now suggest that business uptake of AI tools is stalling, with many projects never advancing beyond the pilot stage. The enormous costs sunk into training and running large models have raised questions about whether the biggest AI companies will ever recoup their investments, especially as updates to core systems no longer feel like transformative leaps.
The lackluster reception of OpenAI’s GPT-5 in August epitomizes this shift. Despite months of hype from CEO Sam Altman, who called it a “PhD-level expert in anything,” GPT-5 landed as more incremental than revolutionary, prompting observers like AI researcher and YouTuber Yannic Kilcher to declare that “the era of boundary-breaking advancements is over” and that “AGI is not coming.” Yet the article stresses that this narrative of hitting a wall is too simplistic, pointing to OpenAI’s earlier releases such as o1, o3, and Sora 2, and to newer models like Google DeepMind’s Nano Banana Pro, as evidence that progress continues even if the wow factor is fading. The author argues that what is really happening is a necessary “hype correction,” in which expectations are lowered to something closer to reality.
A key theme is that large language models are not the whole story of AI and are not the shortcut to artificial general intelligence that some evangelists once suggested. Even Ilya Sutskever, now at Safe Superintelligence and formerly a leading voice at OpenAI, emphasizes that LLMs can learn many specific tasks but do not seem to grasp the underlying principles, generalizing “dramatically worse than people.” The article also challenges the idea that AI is a quick fix for business, highlighting a July MIT study whose headline finding was that a whopping 95% of businesses that had tried using AI had found zero value in it. That 95% figure refers to bespoke systems that failed to scale beyond pilots after six months, however, and does not capture the widespread unofficial use of chatbots by employees, which the same researchers found at around 90% of surveyed companies.
Further evidence comes from an Upwork study showing that agents powered by top LLMs from OpenAI, Google DeepMind, and Anthropic often failed to complete straightforward workplace tasks on their own, though success rates rose sharply when the agents worked alongside skilled people. This supports Andrej Karpathy’s view that chatbots can outperform the average person at tasks such as giving legal advice, fixing bugs, or doing high school math, but cannot match expert practitioners, which helps explain why they have boosted individual productivity without “joining the workforce” in the way Altman predicted when he wrote that “in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” The article concludes that AI is not replacing humans anytime soon but is gradually being woven into workflows, with the most valuable uses still emerging.
The piece then asks whether AI is in a bubble and, if so, what kind. It contrasts the subprime mortgage bubble of 2008, which left little behind but debt, with the dot-com bubble of 2000, which destroyed many companies yet left behind the infant internet and giants like Google and Amazon. Today there is still no clear business model for LLMs, and many economists worry about unprecedented levels of investment in data center infrastructure, compounded by circular deals “with Nvidia paying OpenAI to pay Nvidia, and so on.” Some investors, such as Silver Lake cofounder Glenn Hutchins, are more relaxed, citing the fact that “every one of these data centers, almost all of them, has a solvent counterparty that is contracted to take all the output they’re built to suit,” and highlighting Microsoft’s role as a highly creditworthy customer. At the same time, examples like Synthesia, which investor Nathan Benaich says now has around 55,000 corporate customers and brings in around ? million a year after initial skepticism about its market, show how niche-looking applications can quickly become significant businesses, even as the exact figures remain unclear.
Finally, the article situates the current moment in a longer history, noting that ChatGPT capped a decade of deep-learning work whose roots stretch back to the 1980s, and to even earlier AI research in the 1950s. High-quality submissions to major AI conferences are at record levels, to the point that some papers judged good enough to accept are being turned away for lack of capacity, even as preprint servers like arXiv are clogged with AI-generated “research slop.” Sutskever describes this phase as a return to “the age of research,” a bottleneck that could precede new breakthroughs rather than a terminal slowdown. Benaich argues that the hype has, in some ways, been useful because it attracted the money and talent that drove genuine advances, transforming “research nerds” into the center of the technology world. The article ends by suggesting that the collapse of unsustainable hype is healthy: it opens space to rigorously assess AI’s real capabilities and flaws, figure out how to apply it beneficially, and acknowledge that we do not yet fully understand what we have already built, let alone what comes next.
