A small but influential community of artificial intelligence safety advocates, often labeled “doomers,” is navigating a backlash after months of talk about an artificial intelligence investment bubble and a widely perceived letdown from OpenAI’s GPT-5. These researchers, scientists, and policy experts argue that advanced artificial intelligence could pose an existential risk to humanity, particularly once artificial general intelligence (AGI) and then superintelligence arrive. While critics cite stalled hype and policy pushback as grounds to dismiss warnings of imminent catastrophe, doomers insist that their argument has never depended on AGI being right around the corner and that the recent turbulence is a distraction from long-term dangers.
GPT-5, billed by Sam Altman as feeling “like a PhD-level expert” and so capable it made him feel “useless relative to the AI,” achieved state-of-the-art benchmark scores but was marred by technical bugs and a chaotic decision to cut off access to older models. Accelerationists such as Trump administration artificial intelligence czar David Sacks and White House senior policy advisor Sriram Krishnan seized on the disappointment to declare that “doomer narratives were wrong” and that “this notion of imminent AGI has been a distraction and harmful and now effectively proven wrong.” Doomers counter that these reactions threaten fragile progress on regulation, including the EU Artificial Intelligence Act, California’s SB 53, and emerging congressional interest in AGI risk, by encouraging Washington to overcorrect in response to short-term underperformance.
Interviews with 20 artificial intelligence safety and governance figures show a movement recalibrating rather than retreating. Turing Award winners Geoffrey Hinton, who also holds a Nobel Prize, and Yoshua Bengio both describe large uncertainty but see trends still pointing toward human-level and then superintelligent systems, with Hinton noting that “most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years.” Stuart Russell stresses that safety “isn’t about imminence,” comparing AGI preparation to not waiting until “2066” to respond to an asteroid predicted to strike in “2067,” and argues that acceptable extinction risk should be closer to “one in a billion,” not the “one in five” some companies casually reference. Others, including Helen Toner, Jeffrey Ladish, David Krueger, and Daniel Kokotajlo, say their AGI timelines have lengthened slightly but still range from a few years to roughly 20, which they see as still demanding urgent work on control, governance, and public awareness.
Several experts worry about credibility and public perception as timelines shift. Toner fears that aggressive AGI dates like “2027” risk a “boy-who-cried-wolf moment” if those predictions fail to materialize, allowing opponents to dismiss the entire artificial intelligence safety project. Krueger and Kokotajlo emphasize that doomers’ core claims do not require very short timelines or faith that current large language models will directly yield AGI, only that the stakes are high enough that even a modest probability of catastrophe justifies significant effort. Bengio draws a parallel to climate change, warning that business and government leaders still treat artificial intelligence as just another powerful technology rather than a force that could fundamentally transform the world. Across the interviews, a common theme emerges: recent setbacks may slightly extend the clock, but for those convinced that AGI is likely within the next 30 years, the world is still “far from ready,” and the push for artificial intelligence safety and regulation must accelerate, not fade.
