The road to artificial general intelligence

Founders and researchers debate whether current advances will yield Artificial Intelligence that matches human flexibility, and which hardware, software, and orchestration steps would make that possible.

Contemporary models can discover drugs and write code, yet they stumble on puzzles a lay person solves in minutes. That gap sits at the heart of the effort to produce artificial general intelligence, systems that rival human versatility across domains. The tension is practical and conceptual: breakthroughs in narrow tasks coexist with glaring failures in simple reasoning, and that duality shapes expectations about when and how generality might arrive.

Industry figures place radically different timelines on the horizon. Dario Amodei, co-founder of Anthropic, predicts the emergence of "powerful artificial intelligence" as early as 2026, describing capabilities such as Nobel prize level domain expertise, seamless switching between text, audio and the physical world, and autonomous goal-directed reasoning rather than mere prompt response. Sam Altman, chief executive of OpenAI, says AGI-like properties are "coming into view" and frames the potential shift as comparable to the arrival of electricity or the internet. He attributes momentum to steady improvements in training methods, access to data, expanding compute, and falling costs that yield what he calls "super-exponential" socioeconomic value.

Forecasts from aggregated surveys reinforce that optimism while underlining uncertainty. One survey assigns at least a 50 percent chance to systems reaching several AGI milestones by 2028. The probability that unaided machines will outperform humans in every task is estimated at 10 percent by 2027, rising to 50 percent by 2047 in some responses. Time horizons have shrunk notably, from multi-decade estimates around the launch of GPT-3 to single-digit years in late 2024. Ian Bratt of Arm emphasizes that large language and reasoning models are already reshaping industry practices, even as foundational gaps remain.

Beyond timelines, the discussion narrows to enablers: hardware scale, software advances, training data, compute economics, and the orchestration that ties them together. The future will depend on how those pieces combine, and on the social choices that govern deployment. This report was produced by Insights, the custom content arm of MIT Technology Review, and was created entirely by human writers, editors, analysts, and illustrators; any limited use of tools was subject to human review. It was not written by MIT Technology Review’s editorial staff.


How Artificial Intelligence is reshaping financial services oversight

Financial services regulators are largely treating Artificial Intelligence as another technology governed by existing rules rather than building new securities-specific frameworks. History suggests that clearer expectations will emerge through examinations, enforcement, and supervisory guidance.

Nvidia faces gamer backlash over Artificial Intelligence shift

Nvidia is facing growing frustration from gamers as memory supply is steered toward data center chips and DLSS 5 becomes more central to game performance. The dispute highlights how far the company’s priorities have shifted toward enterprise Artificial Intelligence.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.
