Have large language models plateaued?

A Hacker News thread debates whether large language models have plateaued or whether recent gains come mainly from better tooling and applications, with autonomous AI agents producing both striking demos and notable failures.

Commenters on a Hacker News thread disagree about whether large language models have reached a plateau. Some argue that “the LLMs have reached a plateau” and expect only marginal gains from successive generations, while others point to significant improvements in both models and tooling over the last six months. The conversation positions innovation as shifting from raw model breakthroughs to novel applications, agent frameworks, and improved developer workflows.

Participants cite concrete demonstrations on both sides. Supporters of continued progress point to agent-driven tasks that search emails, refine queries, and infer buried information, plus demos such as AlphaEvolve and a Microsoft agentic testing demo in which Copilot drove a browser and wrote Playwright tests. Critics point to messy real-world results: autonomous agents failing in pull requests on the dotnet codebase, repeatedly resubmitting broken fixes, and live on-stage failures that the presenter downplayed. The thread references Claude 4, Claude 3.7, o3, and Gemini 2.5 Pro, as well as projects like aider, to illustrate the range of recent advances and public showcases.
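
For readers unfamiliar with what the Playwright demo involves, the following is a minimal sketch of the kind of browser test such an agent might generate. It uses Playwright's Python API with the pytest-playwright plugin, which supplies the page fixture; the target URL and title assertion are illustrative assumptions, not details from the demo.

    # Minimal sketch of an agent-generated browser test, assuming the
    # pytest-playwright plugin (which provides the `page` fixture).
    from playwright.sync_api import Page, expect

    def test_homepage_has_title(page: Page):
        # Hypothetical target; the demo's actual site was not named.
        page.goto("https://example.com")
        expect(page).to_have_title("Example Domain")

Run with pytest after installing playwright and pytest-playwright; the point is simply that the agent's output is ordinary, reviewable test code.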

Practical developer experience reported in the discussion is mixed. Some say models are excellent for greenfield development and bounded tasks but struggle with large existing codebases, especially when changes must span front end, API, business logic, data access, tests, and infrastructure. One commenter wrote that they have not seen an LLM consistently implement new features or refactors in a 100k+ LOC codebase without producing messy, convention-ignoring changes. Workarounds include dumping a codebase into Gemini to produce an architecture spec, then using aider or Claude Code for implementation, which “90% works 80% of the time.” Others warn that “Dumping it at Claude 3.7 with no instructions will 100% get random rewriting,” underscoring that gains often come from tooling, prompting, and system design rather than model-only progress.
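
As a concrete illustration of that spec-then-implement workaround, aider can also be driven from a script. The sketch below uses aider's Python scripting interface (Coder.create and coder.run); the model name, file list, spec filename, and the read_only_fnames parameter are assumptions for illustration, not details from the thread.

    # Minimal sketch of the spec-then-implement workflow, assuming
    # aider's Python scripting API (Coder.create / coder.run).
    from aider.coders import Coder
    from aider.models import Model

    # Hypothetical model and files; substitute whatever the spec covers.
    model = Model("claude-3-7-sonnet-latest")
    coder = Coder.create(
        main_model=model,
        fnames=["src/api/orders.py", "src/services/billing.py"],
        # Assumed parameter: keeps the Gemini-produced spec in context
        # without letting the model edit it.
        read_only_fnames=["architecture-spec.md"],
    )

    # A scoped, spec-anchored instruction, per the thread's warning that
    # no-instruction prompts invite random rewriting.
    coder.run("Implement the order-refund flow described in architecture-spec.md")

Anchoring the request to an explicit spec is the design point: the model edits only the named files against a fixed plan instead of roaming the codebase.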

Impact Score: 55

China eyes chip-stacking to narrow gap with NVIDIA

Wei Shaojun said China could narrow its technology gap with NVIDIA by stacking 14 nm logic dies with 18 nm DRAM and adopting new compute architectures. The approach aims to improve AI performance and energy efficiency while relying on a fully domestic supply chain.

Pat Gelsinger’s xLight gets tentative U.S. support for EUV FELs

The U.S. Department of Commerce has signed a non-binding letter of intent under the CHIPS and Science Act to support xLight, a venture-backed startup focused on EUV free-electron lasers, paving the way for an as-yet-unspecified amount of government funding. The company, which added Pat Gelsinger as executive chairman, plans to build its first system at the Albany Nanotech Complex.
