Breaking coverage centers on a new interview with Ilya Sutskever in which he acknowledges that the returns from scaling, via more chips and more data, are flattening, and that large language models generalize worse than people do. Sutskever signals openness to neurosymbolic techniques and to innateness, and he stops short of forecasting a bright future for pure large language models. Gary Marcus notes that this perspective echoes warnings he and others have made for years, citing his pre-GPT 2018 critique and subsequent writings on the limits of deep learning and of the Kaplan scaling laws.
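For readers who want the arithmetic behind "flattening": the Kaplan scaling laws are empirical power laws, and even taken at face value they imply steeply diminishing returns. Here is a minimal sketch using the roughly 0.05 compute exponent reported by Kaplan et al. (2020); the normalization constant is an illustrative assumption, not a fitted value.

```python
# Sketch: diminishing returns under a Kaplan-style power law.
# Kaplan et al. (2020) report test loss scaling roughly as
# L(C) = (C_c / C) ** alpha_C with alpha_C ~ 0.050 for compute C;
# the constant C_c below is illustrative, not a fitted value.

ALPHA_C = 0.050  # approximate published compute exponent
C_C = 1.0        # illustrative normalization constant

def loss(compute: float) -> float:
    """Power-law loss as a function of compute, in the Kaplan form."""
    return (C_C / compute) ** ALPHA_C

# Each tenfold increase in compute shrinks loss by only ~11%:
for exp in range(1, 6):
    c = 10.0 ** exp
    print(f"compute=1e{exp}: loss={loss(c):.3f} "
          f"({loss(c) / loss(c / 10):.3f}x the previous)")
```

Because the exponent is so small, each additional order of magnitude of compute buys the same modest fractional improvement, which is the arithmetic behind the concern that the strategy pays less and less per dollar.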
Marcus frames the shift as arriving late to the mainstream and at great cost. He offers a back-of-the-envelope estimate of roughly a trillion dollars spent on the scaling detour, much of it on Nvidia chips and high salaries, and highlights the venture capital dynamics that encourage continued bets on scaling. In particular, he calls out the structural incentives of the 2% management fee model, under which general partners collect fees on committed capital, making scaling an attractive, low-risk bet for venture firms even if outcomes disappoint. He and others warn of potential collateral damage if expectations for generative AI do not materialize, including damaged returns for limited partners, pressure on banks and credit, and a possible macroeconomic correction if AI-related spending has been propping up recent growth.
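To make the fee incentive concrete, a hedged back-of-the-envelope sketch follows; the fund size, ten-year term, and the simplification that fees accrue on committed capital for the full term are illustrative assumptions, and only the 2% rate comes from the piece.

```python
# Illustrative venture fund economics under a 2% management fee.
# Only the 2% rate is from the piece; the fund size and ten-year
# term are hypothetical round numbers.

fund_size = 1_000_000_000  # hypothetical $1B fund
mgmt_fee_rate = 0.02       # 2% management fee per year
fund_term_years = 10       # assumed fund lifetime

# Fees accrue on committed capital regardless of investment outcomes:
total_fees = fund_size * mgmt_fee_rate * fund_term_years
print(f"Management fees over the fund's life: ${total_fees:,.0f}")
# -> Management fees over the fund's life: $200,000,000
```

On these assumptions, fees alone come to $200 million regardless of outcomes, the low-risk dynamic the piece describes: the general partners are paid either way, while limited partners bear the downside.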
The piece synthesizes technical and economic critique. On the technical side, papers and researchers have documented persistent problems with hallucination, truthfulness, generalization, and reasoning that larger models have not resolved; on the economic side, commentators in the financial press warn that much of the economy's recent growth is premised on promised productivity gains from generative AI. Marcus concludes that the field went all in on a single path, sidelining other disciplines such as cognitive science, and that the community now faces the cost of rediscovering lessons that advocates of neurosymbolic and innate-constraint approaches have long argued. The central warning is practical: a trillion dollars is a terrible sum to have potentially wasted if the core problems remain unsolved.
