Self-critique method lifts large language model planning performance

Researchers at Google DeepMind and collaborators show that intrinsic self-critique can significantly improve large language model planning on benchmarks like Blocksworld, Logistics, and Mini-grid without external verification tools.

Researchers from Google DeepMind and collaborators have introduced an intrinsic self-critique method that allows large language models to evaluate and refine their own plans, leading to substantial gains on standard planning benchmarks. The work targets long-standing limitations of language models in planning and reasoning, and demonstrates that self-generated feedback can improve performance on the Blocksworld, Logistics, and Mini-grid datasets without relying on external verification tools. The approach is positioned as a step toward more robust, self-improving artificial intelligence systems that can better handle complex planning tasks expressed in natural language.

The core of the method is an iterative loop: a large language model first proposes a plan, then critiques that plan by assessing its correctness and providing justifications, and finally uses this feedback as in-context material for the next planning attempt. The researchers started with a few-shot learning setup and progressively extended it to a many-shot regime, showing that substantial improvement is possible through iterative correction and refinement. Experiments used model checkpoints from October 2024, establishing new state-of-the-art results on multiple planning benchmarks and demonstrating that the technique transfers across different model versions.
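The propose-critique-refine loop can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `propose` and `critique` callables are hypothetical stand-ins for prompted LLM calls, and the acceptance check on the critique text is a placeholder.

```python
from typing import Callable

def self_critique_plan(problem: str,
                       propose: Callable[[str], str],
                       critique: Callable[[str, str], str],
                       max_iters: int = 10) -> str:
    """Iteratively propose a plan, critique it, and replan with the
    accumulated feedback folded back into the prompt (no parameter updates).
    The iteration cap mirrors the ten-step limit imposed by context length."""
    history = []  # growing in-context record of (plan, critique) pairs
    context = problem
    plan = ""
    for _ in range(max_iters):
        plan = propose(context)
        feedback = critique(problem, plan)
        history.append((plan, feedback))
        # placeholder acceptance check: stop once the critic accepts the plan
        if feedback.lower().startswith("correct"):
            return plan
        # aggregate all past attempts and critiques into the next prompt
        context = problem + "\n" + "\n".join(
            f"Attempt: {p}\nCritique: {c}" for p, c in history)
    return plan  # best effort after exhausting the iteration budget
```

With real models, `propose` and `critique` would be two prompts to the same LLM; here they can be any functions with the same shape.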

The team tested the method on planning problems of varying difficulty, including Blocksworld scenarios with 3-5 and 3-7 blocks as well as the standard Logistics and Mini-grid datasets, and reported consistently higher accuracy than strong existing baselines. The self-critique mechanism reduced false positives and improved error detection by aggregating past plans and critiques into a growing in-context history that the model could learn from without any parameter updates. In a key result, combining self-critique with self-consistency yielded a new state-of-the-art 89.3% success rate on Blocksworld 3-5, and the work is the first demonstration of LLMs solving Mystery Blocksworld problems, reaching 22% accuracy and improving to 37.8% with the self-improvement techniques. The authors note a limitation arising from context length, which forced them to cap iterative critique at ten steps. They suggest that combining the self-critique process with methods such as Chain-of-Thought or Monte-Carlo Tree Search on more capable models could further close the gap between language model planners and traditional algorithmic planners, especially in real-world, natural-language planning scenarios such as holiday planning or meeting scheduling, where classic systems often struggle.
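Self-consistency, which the article pairs with self-critique for the 89.3% Blocksworld result, amounts to sampling several independent plans and keeping the most frequent one. A minimal sketch, where the `sample_plan` callable is a hypothetical stand-in for a temperature-sampled LLM call:

```python
from collections import Counter
from typing import Callable

def self_consistency(problem: str,
                     sample_plan: Callable[[str], str],
                     n_samples: int = 5) -> str:
    """Draw n_samples independent candidate plans and return the one
    that appears most often (majority vote over samples)."""
    plans = [sample_plan(problem) for _ in range(n_samples)]
    winner, _count = Counter(plans).most_common(1)[0]
    return winner
```

Majority voting over serialized plans assumes identical plans serialize identically; a production system would vote over normalized or verified plans instead.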


