Artificial intelligence doomers stay the course despite hype backlash

A string of disappointments and bubble talk has emboldened artificial intelligence accelerationists, but prominent artificial intelligence safety advocates say their core concerns about artificial general intelligence risk remain intact, even as their timelines stretch.

A small but influential community of artificial intelligence safety advocates, often labeled “doomers,” is navigating a backlash after months of talk about an artificial intelligence investment bubble and a widely perceived letdown from OpenAI’s GPT-5. These researchers, scientists, and policy experts argue that advanced artificial intelligence could pose an existential risk to humanity, particularly once artificial general intelligence (AGI) and then superintelligence arrive. While critics point to stalled hype and policy pushback to dismiss imminent catastrophe, doomers insist that their argument has never depended on AGI being right around the corner and that recent turbulence is a distraction from long-term dangers.

GPT-5, billed by Sam Altman as feeling “like a PhD-level expert” and so capable it made him feel “useless relative to the AI,” achieved state-of-the-art benchmark scores but was marred by technical bugs and a chaotic decision to cut off access to older models. Accelerationists such as Trump administration artificial intelligence czar David Sacks and White House senior policy advisor Sriram Krishnan seized on the disappointment to declare that “doomer narratives were wrong” and that “this notion of imminent AGI has been a distraction and harmful and now effectively proven wrong.” Doomers counter that these reactions threaten fragile progress on regulation, including the EU Artificial Intelligence Act, California’s SB 53, and emerging congressional interest in AGI risk, by encouraging Washington to overreact to short-term underperformance.

Interviews with 20 artificial intelligence safety and governance figures show a movement recalibrating rather than retreating. Turing Award winners Geoffrey Hinton, who is also a Nobel laureate, and Yoshua Bengio both describe large uncertainty but see trends still pointing toward human-level and then superintelligent systems, with Hinton noting that “most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years.” Stuart Russell stresses that safety “isn’t about imminence,” comparing AGI preparation to not waiting until “2066” to respond to an asteroid predicted to strike in “2067,” and argues that acceptable extinction risk should be closer to “one in a billion,” not the “one in five” some companies casually reference. Others, including Helen Toner, Jeffrey Ladish, David Krueger, and Daniel Kokotajlo, say their AGI timelines have lengthened slightly but remain within the next few years to 20 years, which they see as still demanding urgent work on control, governance, and public awareness.

Several experts worry about credibility and public perception as timelines shift. Toner fears that aggressive AGI dates like “2027” risk a “boy-who-cried-wolf moment” if they fail, allowing opponents to dismiss the entire artificial intelligence safety project. Krueger and Kokotajlo emphasize that doomers’ core claims do not require very short timelines or faith that current large language models will directly yield AGI, only that the stakes are high enough that even a modest probability of catastrophe justifies significant effort. Bengio draws a parallel to climate change, warning that business and government leaders still treat artificial intelligence as just another powerful technology rather than a force that could fundamentally transform the world. Across interviews, a common theme emerges: recent setbacks may slightly extend the clock, but for those convinced that AGI is likely within the next 30 years, the world is still “far from ready,” and the push for artificial intelligence safety and regulation must accelerate, not fade.

Impact Score: 55

OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.
