Have large language models plateaued?

A Hacker News thread debates whether large language models have plateaued or whether recent gains come from better tooling and applications, with autonomous Artificial Intelligence agents showing striking demos and notable failures.

Commenters on a Hacker News thread disagree about whether large language models have reached a plateau. Some argue “the LLMs have reached a plateau” and expect only marginal gains from successive generations, while others point to significant recent improvements in both models and tooling over the last 6 months. The conversation positions innovation as shifting from raw model breakthroughs to novel uses, agent frameworks, and improved developer workflows.

Participants cite concrete demonstrations on both sides. Supporters of progress point to agent-driven tasks that search emails, refine queries, and infer buried information, plus demos such as AlphaEvolve and a Microsoft agentic testing demo in which Copilot drives a browser and writes Playwright tests. Critics point to messy real-world results: autonomous agents failing in pull requests on the dotnet codebase, reiterating broken fixes, and failing live on stage while the presenter downplayed the errors. The thread references Claude 4, Claude 3.7, o3, and Gemini 2.5 Pro, along with projects like aider, to illustrate the range of recent advances and public showcases.
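The Copilot demo is only described second-hand in the thread, but a browser test of the kind it reportedly generated would look roughly like the following sketch using Playwright's Python API; the URL, locators, and expected element are hypothetical placeholders, not the demo's actual output.

```python
# Hypothetical sketch of the kind of browser test the Copilot demo
# reportedly generated. The URL, locators, and test id below are
# invented placeholders, not the demo's actual output.
from playwright.sync_api import sync_playwright, expect

def test_search_returns_results():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/search")  # placeholder URL
        page.get_by_placeholder("Search").fill("playwright")
        page.get_by_role("button", name="Search").click()
        # Assert that at least one result row is rendered.
        expect(page.get_by_test_id("result-row").first).to_be_visible()
        browser.close()
```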

Practical developer experience reported in the discussion is mixed. Some say models are excellent for greenfield development and bounded tasks but struggle with large existing codebases, especially when changes must span front end, API, business logic, data access, tests, and infrastructure. One commenter wrote that they have not seen an LLM consistently implement new features or refactors in a 100k+ LOC codebase without producing messy, convention-ignoring changes. Workarounds include dumping a codebase into Gemini to produce an architecture spec and then using aider or Claude Code for implementation, an approach that “90% works 80% of the time.” Others warn that “Dumping it at Claude 3.7 with no instructions will 100% get random rewriting,” underscoring that gains often come from tooling, prompting, and system design rather than model-only progress.
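As a minimal sketch of that spec-first workflow, aider's Python scripting interface can hand an already-written architecture spec to the model for implementation; the model identifier, file names, and prompt below are assumptions for illustration, with spec.md presumed to hold the Gemini-produced spec.

```python
# Minimal sketch of the "spec first, then aider" workflow via aider's
# Python scripting interface. Model identifier and file paths are
# assumed placeholders; spec.md is presumed to hold the architecture
# spec produced by Gemini in the earlier step.
from aider.coders import Coder
from aider.models import Model
from aider.io import InputOutput

model = Model("gemini/gemini-2.5-pro")      # assumed model identifier
io = InputOutput(yes=True)                  # auto-confirm proposed edits
coder = Coder.create(
    main_model=model,
    fnames=["spec.md", "src/feature.py"],   # hypothetical files
    io=io,
)
coder.run(
    "Implement the feature described in spec.md, "
    "following the existing code conventions."
)
```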

Impact Score: 55

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry’s 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla for Artificial Intelligence chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.

How high quality sound shapes virtual communication and trust

As virtual meetings, classes, and content become routine, researchers and audio leaders argue that sound quality is now central to how we judge credibility, intelligence, and trust. Advances in Artificial Intelligence-powered audio processing are making clear, unobtrusive sound both more critical and more accessible across work, education, and marketing.
