Commenters on a Hacker News thread disagree about whether large language models have reached a plateau. Some argue that “the LLMs have reached a plateau” and expect only marginal gains from successive model generations, while others point to significant improvements in both models and tooling over the last six months. The conversation frames innovation as shifting from raw model breakthroughs toward novel uses, agent frameworks, and improved developer workflows.
Participants cite concrete demonstrations on both sides. Supporters of progress point to agent-driven tasks that search emails, refine queries, and infer buried information, plus demos such as AlphaEvolve and a Microsoft agentic testing demo in which Copilot drives a browser and writes Playwright tests. Critics point to messy real-world results: autonomous agents failing in pull requests on the dotnet codebase, repeating broken fixes, and failing live on stage in ways the speaker downplayed. The thread references Claude 4, Claude 3.7, o3, and Gemini 2.5 Pro, along with projects like aider, to illustrate the range of recent advances and public showcases.
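For context on the Copilot demo, the sketch below shows the kind of end-to-end test such an agent is asked to produce. It is illustrative only, not taken from the Microsoft demo: the URL, selectors, and assertions are hypothetical placeholders.

```ts
// Illustrative Playwright test of the sort an agentic testing demo generates.
// The URL, roles, and expected text are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('user can search and open a result', async ({ page }) => {
  // Navigate to the (hypothetical) app under test.
  await page.goto('https://example.com');

  // Fill the search box and submit.
  await page.getByRole('textbox', { name: 'Search' }).fill('quarterly report');
  await page.getByRole('button', { name: 'Search' }).click();

  // Open the first result and verify the detail view renders.
  await page.getByRole('link', { name: /quarterly report/i }).first().click();
  await expect(page.getByRole('heading', { name: /quarterly report/i })).toBeVisible();
});
```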
Practical developer experience reported in the discussion is mixed. Some say models are excellent for greenfield development and bounded tasks but struggle with large existing codebases, especially when changes must span front end, API, business logic, data access, tests, and infrastructure. One commenter wrote that they have not seen an LLM consistently implement new features or refactors in a 100k+ LOC codebase without producing messy, convention-ignoring changes. Workarounds include dumping a codebase into Gemini for an architecture spec and using aider or Claude Code for the implementation, which “90% works 80% of the time.” Others warn that “Dumping it at Claude 3.7 with no instructions will 100% get random rewriting,” underscoring that gains often come from tooling, prompting, and system design rather than model-only progress.
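As a rough illustration of the “architecture spec first” workaround, the sketch below concatenates a few source files and asks Gemini for a written plan before any code is changed. It is a minimal sketch under stated assumptions, not the commenter's actual setup: the file paths, the feature named in the prompt, and the use of the @google/generative-ai client with a gemini-2.5-pro model name are all assumptions.

```ts
// Minimal sketch of the "spec first, implement later" workaround: send selected
// source files to Gemini and ask for an architecture spec. File paths, prompt
// wording, and the model name are hypothetical placeholders.
import { readFileSync } from 'node:fs';
import { GoogleGenerativeAI } from '@google/generative-ai';

// Hypothetical slice of a larger codebase.
const files = ['src/api/orders.ts', 'src/services/billing.ts', 'src/db/schema.ts'];

const context = files
  .map((path) => `// FILE: ${path}\n${readFileSync(path, 'utf8')}`)
  .join('\n\n');

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? '');
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-pro' });

const prompt = [
  'You are reviewing part of an existing codebase.',
  'Produce an architecture spec for adding invoice export:',
  'list the files to change, the new interfaces, and the tests to add.',
  'Do not write implementation code yet.',
  '',
  context,
].join('\n');

const result = await model.generateContent(prompt);
// The resulting spec would then be handed to aider or Claude Code for implementation.
console.log(result.response.text());
```

The point of the split is the one the thread makes: the model that is good at reading a large codebase and planning is not necessarily the tool you want editing files, so the spec becomes the instruction set that keeps the implementing agent from “random rewriting.”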