Artificial intelligence doomers stay the course despite hype backlash

A string of disappointments and bubble talk has emboldened artificial intelligence accelerationists, but prominent safety advocates say their core concerns about artificial general intelligence risk remain intact, even as their timelines stretch.

A small but influential community of artificial intelligence safety advocates, often labeled “doomers,” is navigating a backlash after months of talk about an artificial intelligence investment bubble and the widely perceived letdown of OpenAI’s GPT-5. These researchers, scientists, and policy experts argue that advanced artificial intelligence could pose an existential risk to humanity, particularly once artificial general intelligence (AGI) and then superintelligence arrive. While critics point to stalled hype and policy pushback as reasons to dismiss warnings of imminent catastrophe, doomers insist that their argument has never depended on AGI being right around the corner and that the recent turbulence is a distraction from long-term dangers.

GPT-5, billed by Sam Altman as feeling “like a PhD-level expert” and so capable it made him feel “useless relative to the AI,” achieved state-of-the-art benchmark scores but was marred by technical bugs and a chaotic decision to cut off access to older models. Accelerationists such as Trump administration artificial intelligence czar David Sacks and White House senior policy advisor Sriram Krishnan seized on the disappointment to declare that “doomer narratives were wrong” and that “this notion of imminent AGI has been a distraction and harmful and now effectively proven wrong.” Doomers counter that these reactions threaten fragile progress on regulation, including the EU Artificial Intelligence Act, California’s SB 53, and emerging congressional interest in AGI risk, by encouraging Washington to overreact to short-term underperformance.

Interviews with 20 artificial intelligence safety and governance figures show a movement recalibrating rather than retreating. Geoffrey Hinton, a Nobel and Turing laureate, and fellow Turing Award winner Yoshua Bengio both describe large uncertainty but see trends still pointing toward human-level and then superintelligent systems, with Hinton noting that “most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years.” Stuart Russell stresses that safety “isn’t about imminence,” comparing AGI preparation to not waiting until “2066” to respond to an asteroid predicted to strike in “2067,” and argues that acceptable extinction risk should be closer to “one in a billion,” not the “one in five” some companies casually reference. Others, including Helen Toner, Jeffrey Ladish, David Krueger, and Daniel Kokotajlo, say their AGI timelines have lengthened slightly but still range from a few years to roughly 20 years out, which they see as demanding urgent work on control, governance, and public awareness.

Several experts worry about credibility and public perception as timelines shift. Toner fears that aggressive AGI dates like “2027” risk a “boy-who-cried-wolf moment” if they pass without AGI arriving, allowing opponents to dismiss the entire artificial intelligence safety project. Krueger and Kokotajlo emphasize that doomers’ core claims do not require very short timelines or faith that current large language models will directly yield AGI, only that the stakes are high enough that even a modest probability of catastrophe justifies significant effort. Bengio draws a parallel to climate change, warning that business and government leaders still treat artificial intelligence as just another powerful technology rather than a force that could fundamentally transform the world. Across interviews, a common theme emerges: recent setbacks may slightly extend the clock, but for those convinced that AGI is likely within the next 30 years, the world is still “far from ready,” and the push for artificial intelligence safety and regulation must accelerate, not fade.

Impact Score: 55

Artificial intelligence labs race to turn virtual materials into real-world breakthroughs

Startups like Lila Sciences, Periodic Labs, and Radical AI are betting that autonomous labs guided by artificial intelligence can finally turn decades of virtual materials predictions into real compounds with commercial impact, but the field is still waiting for a definitive breakthrough. Their challenge is to move beyond simulations and hype to deliver synthesized, tested materials that industry will actually adopt.

The great artificial intelligence hype correction of 2025

After a breakneck cycle of product launches and bold promises, the artificial intelligence industry is entering a more sober phase as stalled adoption, diminishing leaps in model performance, and shaky business models force a reset in expectations. Researchers, investors, and executives are now reassessing what large language models can and cannot do, and what kind of artificial intelligence future is realistically taking shape.

Sam Altman’s role in shaping artificial intelligence hype

Sam Altman’s sweeping promises about superintelligent systems and techno-utopia have helped define how Silicon Valley and the public imagine the future of artificial intelligence, often ahead of what the technology can actually prove.
