Self-critique method lifts large language model planning performance

Researchers at Google DeepMind and collaborators show that intrinsic self-critique can significantly improve large language model planning on benchmarks like Blocksworld, Logistics, and Mini-grid without external verification tools.

Researchers from Google DeepMind and collaborators have introduced an intrinsic self-critique method that allows large language models to evaluate and refine their own plans, leading to substantial gains on standard planning benchmarks. The work targets long-standing limitations of large language models in planning and reasoning, and demonstrates that self-generated feedback can improve performance on the Blocksworld, Logistics, and Mini-grid datasets without relying on external verification tools. The approach is positioned as a step toward more robust, self-improving artificial intelligence systems that can better handle complex planning tasks expressed in natural language.

The core of the method is an iterative loop in which a large language model first proposes a plan, then critiques that plan by assessing its correctness and providing justifications, and finally uses this feedback as contextual material for the next planning attempt. The researchers started with a few-shot learning setup and progressively extended it to a many-shot regime, showing that substantial improvement is possible through iterative correction and refinement. Experiments used model checkpoints from October 2024 as the basis for evaluation, establishing new state-of-the-art results on multiple planning benchmarks and demonstrating that the technique transfers across different model versions.
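The paper does not publish pseudocode, but the propose–critique–refine loop described above can be sketched roughly as follows. Everything here is illustrative: `fake_llm`, the prompt wording, and the `VALID`/`INVALID` convention are stand-ins for a real language model call, not the authors' implementation.

```python
def fake_llm(prompt: str) -> str:
    """Toy stand-in for a language model, so the control flow runs end to end.

    It judges a plan valid only if it begins by unstacking block B,
    and proposes a corrected plan once it sees a critique in context.
    """
    if "Critique the plan" in prompt:
        return "VALID" if "unstack B" in prompt else "INVALID: the first step is illegal."
    if "INVALID" in prompt:  # a prior critique is in context; revise
        return "unstack B; putdown B; pickup A; stack A on B"
    return "pickup A; stack A on B"  # initial (flawed) proposal


def plan_with_self_critique(task: str, llm, max_iters: int = 10) -> str:
    """Propose a plan, self-critique it, and refine using the accumulated
    history of past plans and critiques as in-context feedback."""
    history = []
    plan = llm(f"Task: {task}\nPropose a plan.")
    for _ in range(max_iters):
        critique = llm(f"Task: {task}\nPlan: {plan}\nCritique the plan.")
        history.append((plan, critique))
        if critique.startswith("VALID"):
            break  # the model judges its own plan correct
        # Aggregate all past plans and critiques into the next prompt.
        context = "\n".join(f"Plan: {p}\nCritique: {c}" for p, c in history)
        plan = llm(f"Task: {task}\n{context}\nPropose a revised plan.")
    return plan
```

The `max_iters=10` cap mirrors the ten-step limit the authors report imposing because of context length; no model weights are updated anywhere in the loop.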

The team tested the method on planning problems of varying difficulty, including Blocksworld scenarios with 3-5 and 3-7 blocks as well as the standard Logistics and Mini-grid datasets, and reported consistently higher accuracy than strong existing baselines. The self-critique mechanism reduced false positives and improved error detection by aggregating past plans and critiques into a growing in-context history that the model could learn from without any parameter updates. In a key result, combining self-critique with self-consistency yielded a new state-of-the-art 89.3% success rate on Blocksworld 3-5. The work is also the first demonstration of LLMs solving Mystery Blocksworld problems, reaching 22% accuracy and rising to 37.8% with the self-improvement techniques applied. The authors note a limitation arising from context length, which forced them to cap iterative critique at ten steps. They suggest that combining the self-critique process with methods such as Chain-of-Thought prompting or Monte-Carlo Tree Search on more capable models could further close the gap between language model planners and traditional algorithmic planners, especially in real-world, natural-language planning scenarios such as holiday planning or meeting scheduling, where classical systems often struggle.
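The article does not detail how self-critique and self-consistency were combined for the 89.3% Blocksworld 3-5 result. One plausible composition, sketched below under that assumption, is to sample several candidate plans, keep only those the critic accepts, and then take a majority vote; the function and variable names are hypothetical.

```python
from collections import Counter


def critique_then_vote(candidate_plans, critic):
    """Filter sampled plans through a critic, then majority-vote (self-consistency).

    `critic` returns True when it judges a plan correct. If it rejects every
    candidate, fall back to voting over all of them.
    """
    accepted = [p for p in candidate_plans if critic(p)]
    pool = accepted if accepted else candidate_plans
    return Counter(pool).most_common(1)[0][0]


# Toy illustration: plan "C" is the most frequent sample, but the critic
# rejects it, so the vote falls to the most frequent accepted plan, "A".
plans = ["A", "B", "A", "C", "C", "C"]
best = critique_then_vote(plans, critic=lambda p: p != "C")
```

The toy run shows why the combination helps: raw self-consistency alone would pick the popular-but-wrong plan, while the critique step removes it before voting, which matches the reported reduction in false positives.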


Why multimodal content pipelines are reshaping media production

Multimodal content creation pipelines are consolidating text, image, and audio workflows into integrated systems that compress production timelines and expand monetization options, while raising fresh legal and ethical challenges. The article examines the tools, economics, and skills driving this shift for tens of millions of creators.

Semiconductor coverage tracks geopolitics, telecom chips and Artificial Intelligence demand

Light Reading’s semiconductor section brings together coverage of geopolitical risks in chip supply, telecom silicon shakeups and surging Artificial Intelligence infrastructure demand, with a strong focus on how these forces reshape vendors such as Intel, Nvidia, Qualcomm, Samsung and Nokia. The stream highlights how shifts in rare earths policy, network silicon strategy and massive memory orders are redefining the broader communications and computing ecosystem.
