A trillion dollars is a terrible thing to waste

Gary Marcus argues that the machine learning mainstream's prolonged focus on scaling large language models may have cost roughly a trillion dollars and produced diminishing returns. He urges a pivot toward new ideas such as neurosymbolic techniques and built-in inductive constraints to address persistent problems.

Breaking coverage centers on a new interview with Ilya Sutskever in which he acknowledges that scaling through more chips and more data is flattening and that large language models generalize worse than people do. Sutskever signals openness to neurosymbolic techniques and innateness, and he stops short of forecasting a bright future for pure large language models. Gary Marcus notes that this perspective echoes warnings he and others have made for years, citing his 2018 critique of deep learning, written before GPT, and subsequent writings on the limits of deep learning and the Kaplan scaling laws.

Marcus frames the shift as a belated and costly arrival at the mainstream. He offers a back-of-the-envelope estimate of roughly a trillion dollars spent on the scaling detour, much of it on Nvidia chips and high salaries, and highlights the venture capital dynamics that encourage continued bets on scaling. Marcus calls out the structural incentives of the 2% management fee model, under which fund managers collect fees on capital deployed into scaling bets regardless of whether those bets pay off, making scaling a low-risk way for them to profit even if outcomes disappoint. He and others warn of potential collateral damage if expectations for generative Artificial Intelligence do not materialize, including damaged returns for limited partners, pressure on banks and credit, and a possible macroeconomic correction if AI-related spending has been propping up recent growth.
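To make the fee incentive concrete, here is a minimal sketch of the arithmetic. The fund size, fee rate, and fund life below are hypothetical round numbers chosen for illustration, not figures from the article, and real funds often step fees down after the investment period.

```python
# Hedged sketch: why a flat management fee rewards fund size over outcomes.
# All figures are hypothetical round numbers, not reported amounts.

def management_fees(fund_size_usd: float, annual_fee_rate: float = 0.02,
                    fund_life_years: int = 10) -> float:
    """Total management fees collected over a fund's life, independent of returns."""
    return fund_size_usd * annual_fee_rate * fund_life_years

# A hypothetical $5B fund yields $1B in fees over ten years,
# whether or not the scaling bets it backs ever pay off.
print(f"${management_fees(5e9):,.0f}")  # -> $1,000,000,000
```

Under this simplified model, the managers' fee income depends only on assets raised, which is the structural point Marcus is making about incentives to keep funding scaling.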

The piece synthesizes technical and economic critique: technical papers and researchers have documented persistent issues with hallucination, truthfulness, generalization, and reasoning despite ever-larger models, while commentators in the economic press warn that much of the economy's recent growth is premised on promised productivity gains from generative Artificial Intelligence. Marcus concludes that the field went all in on one path, excluding other disciplines such as cognitive science, and that the community now faces the cost of rediscovering lessons that advocates of neurosymbolic and innate-constraint approaches have long argued. The central warning is practical: a trillion dollars is a terrible amount to have perhaps wasted if the core problems remain unsolved.

HPC won’t be an x86 monoculture forever

x86 dominance in high-performance computing is receding: its share of the TOP500 has fallen from almost nine in ten machines a decade ago to 57 percent today. The rise of GPUs, Arm, and RISC-V, together with the demands of Artificial Intelligence and hyperscale workloads, is reshaping processor choices.

Experts divided over claim that Chinese hackers launched world-first Artificial Intelligence-powered cyberattack

Anthropic said in a Nov. 13 statement that its engineers disrupted a 'largely autonomous' operation that used its Claude model to automate roughly 80-90% of reconnaissance and exploitation against 30 organizations worldwide. Experts dispute the degree of autonomy but warn that even partial Artificial Intelligence-driven orchestration lowers barriers to espionage and increases its scalability.

Seagate HAMR prototype achieves 6.9 TB per platter for 55 TB HDDs

Seagate disclosed a prototype heat-assisted magnetic recording platter that stores roughly 6.9 TB, enough to enable hard drives with about 55 TB of capacity. The company says the technology would benefit data center cold-storage tiers and workloads such as Artificial Intelligence.

Rapidus plans second Hokkaido plant, targets 1.4 nm production in early 2029

Rapidus reportedly plans to begin construction of a second factory in Hokkaido in 2027 and aims to start production of 1.4 nm chips in early 2029 as part of a trillion-yen initiative. A Rapidus spokesperson said the recent reports are speculation and that any roadmap updates will come directly from the company.
