Self-critique method lifts large language model planning performance

Researchers at Google DeepMind and collaborators show that intrinsic self-critique can significantly improve large language model planning on benchmarks like Blocksworld, Logistics, and Mini-grid without external verification tools.

Researchers from Google DeepMind and collaborators have introduced an intrinsic self-critique method that allows large language models to evaluate and refine their own plans, leading to substantial gains on standard planning benchmarks. The work targets long-standing limitations in planning and reasoning, and demonstrates that self-generated feedback can improve performance on Blocksworld, Logistics, and Mini-grid datasets without relying on external verification tools. The approach is positioned as a step toward more robust and self-improving artificial intelligence systems that can better handle complex planning tasks expressed in natural language.

The core of the method is an iterative loop in which a large language model first proposes a plan, then critiques that plan by assessing its correctness and justifying the assessment, and finally feeds the critique back as context for the next planning attempt. The researchers started with a few-shot setup and progressively extended it to a many-shot regime, showing that substantial improvement is possible through iterative correction and refinement. Experiments used LLM checkpoints from October 2024 as the basis for evaluation, establishing new state-of-the-art results on multiple planning benchmarks and demonstrating that the technique transfers across model versions.
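The propose-critique-refine loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm` is a hypothetical stand-in for a real model API call, and the plan and critique strings are invented for demonstration. The ten-iteration cap mirrors the limit the authors report.

```python
def llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model API call (stubbed responses)."""
    if role == "critic":
        # Toy critic: reject the first draft, accept the revision.
        return "VALID" if "plan v2" in prompt else "INVALID: goal block is buried"
    # Toy planner: revise once the in-context history contains a critique.
    return "plan v2" if "INVALID" in prompt else "plan v1"


def self_critique_plan(task: str, max_iters: int = 10) -> str:
    """Propose a plan, critique it, and feed the critique back as context."""
    history = []  # accumulated (plan, critique) pairs, kept in-context
    for _ in range(max_iters):
        context = "\n".join(f"Plan: {p}\nCritique: {c}" for p, c in history)
        plan = llm("planner", f"Task: {task}\n{context}\nPropose a plan:")
        critique = llm("critic", f"Task: {task}\nPlan: {plan}\nIs it correct?")
        history.append((plan, critique))
        if critique.startswith("VALID"):
            return plan  # the critic accepts the plan: stop refining
    return history[-1][0]  # best-effort plan after the iteration cap
```

With the stub above, the first proposal is rejected and the second, revised plan is accepted on the next pass; with a real model, the growing history of plans and critiques is what enables in-context refinement without any parameter updates.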

The team tested the method on planning problems of varying difficulty, including Blocksworld scenarios with 3-5 and 3-7 blocks as well as the standard Logistics and Mini-grid datasets, and reported consistently higher accuracy than strong existing baselines. The self-critique mechanism reduced false positives and improved error detection by aggregating past plans and critiques into a growing in-context history that the model could learn from without any parameter updates. In a key result, combining self-critique with self-consistency yielded a new state-of-the-art success rate of 89.3% on Blocksworld 3-5, and the work is the first demonstration of LLMs solving Mystery Blocksworld problems, reaching 22% accuracy and improving to 37.8% with the self-improvement techniques. The authors note a context-length limitation that required capping iterative critique at ten steps. They suggest that combining self-critique with methods such as Chain-of-Thought or Monte-Carlo Tree Search on more capable models could further close the gap between language model planners and traditional algorithmic planners, especially in real-world, natural-language planning scenarios such as holiday planning or meeting scheduling, where classic systems often struggle.
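The best Blocksworld result paired self-critique with self-consistency, i.e. sampling several candidate plans and keeping the most frequent answer. A minimal sketch of that voting step (the sampled plan strings are invented for illustration):

```python
from collections import Counter


def self_consistent_plan(samples: list[str]) -> str:
    """Majority vote over independently sampled candidate plans."""
    return Counter(samples).most_common(1)[0][0]


# Hypothetical plans sampled from independent model calls:
votes = ["unstack B, stack A on C", "unstack B, stack A on C", "stack A on C"]
print(self_consistent_plan(votes))  # -> unstack B, stack A on C
```

In practice each sample would come from a separate model call at a nonzero temperature, with the vote filtering out occasional faulty plans.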

Impact Score: 58

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
