DeepSeek Unveils New Method for Scaling Reward Models with SPCT

DeepSeek AI reveals a novel approach to enhancing the scalability of general reward models in artificial intelligence systems.

DeepSeek AI, a leader in the large language model field, has unveiled a novel technique to enhance the scalability of general reward models (GRMs) during the inference phase. The method, Self-Principled Critique Tuning (SPCT), documented in their recent research paper, optimizes reward generation by dynamically producing principles and critiques, and is trained with rejection fine-tuning and rule-based online reinforcement learning.

At a time when the focus on scaling large language models has shifted to the inference phase, DeepSeek's new method aligns with emerging models like OpenAI's o1, which prioritize reinforcement learning and additional computation at test time. This reflects a growing trend toward leveraging reinforcement learning to continuously improve model performance by refining reasoning processes and enhancing decision-making capabilities.

DeepSeek's SPCT approach addresses the challenge of scaling reward modeling for large language models: the GRM is trained, via rejection fine-tuning and rule-based online reinforcement learning, to generate principles and critiques adaptively, improving both the scalability and quality of its rewards at inference time. Experimental results demonstrate the superiority of SPCT over existing methods, setting the stage for further releases, including the anticipated R2 model from DeepSeek.
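The inference-time scaling idea behind this can be sketched in a few lines: sample several independent reward judgments for the same set of candidate responses and aggregate them by voting. The sketch below is an illustration only, with a toy stand-in for the reward model (the real GRM generates textual principles and critiques before scoring); the function names and the noise model are assumptions, not DeepSeek's implementation.

```python
import random
from collections import Counter

def toy_grm(prompt, responses, seed):
    """Hypothetical stand-in for a generative reward model (GRM).

    A real GRM would write principles and critiques as text and extract
    a numeric score per response; here we simulate noisy scoring with a
    bounded perturbation so the voting logic is runnable and stable.
    """
    rng = random.Random(seed)
    # Pretend later responses are genuinely better (quality = index).
    return [i + rng.uniform(-0.4, 0.4) for i, _ in enumerate(responses)]

def vote_best(prompt, responses, k=8):
    """Inference-time scaling: sample k independent reward judgments,
    give one vote per judgment to its top-ranked response, and return
    the index of the response with the most votes."""
    votes = Counter()
    for seed in range(k):
        scores = toy_grm(prompt, responses, seed)
        votes[max(range(len(responses)), key=scores.__getitem__)] += 1
    return votes.most_common(1)[0][0]

responses = ["draft A", "draft B", "draft C"]
print(vote_best("Explain reward modeling", responses, k=16))  # → 2
```

Spending more samples (larger `k`) buys a more reliable aggregate judgment, which is the sense in which reward quality scales with inference compute.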

Impact Score: 70

Adaptive training method boosts training efficiency of reasoning large language models

Researchers have developed an adaptive training system that uses idle processors to train a smaller helper model on the fly, doubling the training speed of reasoning large language models without sacrificing accuracy. The method aims to cut costs and energy use for advanced applications such as financial forecasting and power grid risk detection.

How to run MiniMax M2.5 locally with Unsloth GGUF

MiniMax-M2.5 is a new open large language model optimized for coding, tool use, search, and office tasks, and Unsloth provides quantized GGUF builds and usage recipes for running it locally. The guide focuses on memory requirements, recommended decoding parameters, and deployment via llama.cpp and llama-server with an OpenAI-compatible interface.
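Once llama-server is running (e.g. `llama-server -m <model>.gguf --port 8080`), it exposes an OpenAI-compatible `/v1/chat/completions` endpoint. The snippet below only builds the request payload for that endpoint; the port, model name, and decoding parameters are illustrative placeholders, not Unsloth's official recommendations for MiniMax-M2.5.

```python
import json

# Hypothetical local endpoint exposed by llama-server.
ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_request(user_msg, temperature=1.0, top_p=0.95):
    """Build an OpenAI-style chat completion payload as a JSON string.

    Any OpenAI-compatible client can POST this body to ENDPOINT; the
    sampling values here are placeholders to be replaced with the
    parameters recommended in the Unsloth guide.
    """
    return json.dumps({
        "model": "minimax-m2.5",  # label only; llama-server serves the loaded model
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": temperature,
        "top_p": top_p,
    })

print(build_request("Write a Python function that reverses a string."))
```

Because the interface is OpenAI-compatible, existing tooling (SDKs, agents, IDE plugins) can point at the local server by swapping the base URL.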

Y Combinator backs new wave of computer vision startups in 2026

Y Combinator’s 2026 computer vision cohort spans infrastructure, developer tools, and industry-specific applications from retail security to aquaculture and healthcare. Startups are increasingly pairing computer vision with large vision language models and foundation models to tackle real-time video, automation, and domain-specific analysis.
