At Nvidia’s GTC developer event, Google DeepMind chief scientist Jeff Dean and Nvidia chief scientist Bill Dally described a next phase for artificial intelligence defined by more autonomous agents, more adaptive large language models, and tools built for machine-speed operation. The discussion framed recent progress as unusually fast, with models moving from solving basic math tasks to elite performance in mathematics and coding competitions. Dean said that three or four years ago researchers were excited when models could solve eighth-grade math problems, while by last year Google’s Gemini had reached the gold-medal standard at the International Mathematical Olympiad and won a variety of coding contests.
A major focus was the rise of agents that can complete work with little or no human supervision. Dean pointed to the arrival earlier this year of OpenClaw as an early sign of how unsupervised agents could function, but said the current computing pipeline, including chips, power requirements, communications, and cost, remains a limiting factor. Dally said Nvidia is working on faster agent infrastructure, including data transfer through optical networking technologies. Dean also highlighted the prospect of self-improving agents. He said agents are not yet creating full new versions of themselves, but they already show elements of self-evolution by accepting and rejecting ideas. He connected that trend to work from 2017 on meta-learning, in which systems searched for models suited to particular experiments and problems, a process that can now be guided with natural language rather than code alone.
Dean said future large language models will likely become more interactive with the real world, updating themselves in real time and combining physical and digital information as they learn. He argued that current models remain largely fixed after training, while future systems will learn on the fly and use that knowledge to guide robotic actions and improve prediction. He also said continual-learning models without fixed parameter counts are beginning to emerge, growing, pruning, and compressing their parameters organically. In chip design, he said, the next step is greater automation through a master agent that coordinates specialized sub-agents to build functions, fix bugs, negotiate improvements, and iterate on results. Both panelists said current development tools are too slow for agentic systems and will need to be redesigned for machine-speed reasoning and action, especially in coding, document work, and cybersecurity defense.
The panel also pointed to education as a major application area. Dally criticized university restrictions on artificial intelligence in the classroom and argued that educators should instead use the technology to accelerate learning. Dean said models are on track to become highly personalized tutors that help students understand concepts efficiently without simply handing over answers. He compared that role to the way calculators removed bottlenecks in math learning and helped students move more quickly to advanced work. Together, the speakers described a future in which human experts and increasingly capable agents work as partners across research, engineering, and learning.
