The road to artificial general intelligence

Founders and researchers debate whether current advances will yield artificial intelligence that matches human flexibility, and which hardware, software, and orchestration steps would make that possible.

Contemporary models can discover drugs and write code, yet they stumble on puzzles a layperson solves in minutes. That gap sits at the heart of the effort to produce artificial general intelligence: systems that rival human versatility across domains. The tension is both practical and conceptual: breakthroughs in narrow tasks coexist with glaring failures in simple reasoning, and that duality shapes expectations about when and how generality might arrive.

Industry figures place radically different timelines on the horizon. Dario Amodei, co-founder of Anthropic, predicts the emergence of "powerful artificial intelligence" as early as 2026, describing capabilities such as Nobel Prize-level domain expertise, seamless switching between text, audio, and the physical world, and autonomous goal-directed reasoning rather than mere prompt-response. Sam Altman, chief executive of OpenAI, says AGI-like properties are "coming into view" and frames the potential shift as comparable to the arrival of electricity or the internet. He attributes momentum to steady improvements in training methods, access to data, expanding compute, and falling costs that yield what he calls "super-exponential" socioeconomic value.

Forecasts from aggregated surveys reinforce that optimism while underlining uncertainty. One survey gives systems at least a 50 percent chance of reaching several AGI milestones by 2028. The probability that unaided machines will outperform humans at every task is estimated at 10 percent by 2027, rising to 50 percent by 2047 in some responses. Time horizons have shrunk notably, from multi-decade estimates around the launch of GPT-3 to single-digit years in late 2024. Ian Bratt of Arm emphasizes that large language and reasoning models are already reshaping industry practices, even as foundational gaps remain.

Beyond timelines, the discussion narrows to enablers: hardware scale, software advances, training data, compute economics, and the orchestration that ties them together. The future will depend on how those pieces combine, and on the social choices that govern deployment. This report was produced by Insights, the custom content arm of MIT Technology Review, and was created entirely by human writers, editors, analysts, and illustrators; any limited use of tools was subject to human review. It was not written by MIT Technology Review's editorial staff.
