DeepSeek Unveils New Method for Scaling Reward Models with SPCT

DeepSeek AI reveals a novel approach to enhance the scalability of general reward models in Artificial Intelligence systems.

DeepSeek AI, a leader in the large language model field, has unveiled a novel technique to enhance the scalability of general reward models (GRMs) during the inference phase. The method, Self-Principled Critique Tuning (SPCT), documented in their recent research paper, improves reward generation by training the model to dynamically produce principles and critiques, using rejection fine-tuning and rule-based online reinforcement learning.

At a time when the focus on scaling large language models has shifted to the inference phase, DeepSeek's new method aligns with models like OpenAI's o1, which prioritize additional computation at inference time. This reflects a growing trend toward leveraging reinforcement learning to continuously improve model performance by refining reasoning processes and enhancing decision-making capabilities.

DeepSeek's SPCT approach addresses the challenge of scaling reward modeling for large language models: by training GRMs with rejection fine-tuning and rule-based online reinforcement learning to generate adaptive principles and critiques, it improves both the quality of individual judgments and their scalability at inference time, where multiple sampled judgments can be aggregated. Experimental results demonstrate the superiority of SPCT over existing methods, setting the stage for further releases, including the anticipated R2 model from DeepSeek.
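The inference-time scaling idea described above can be sketched as sampling several principle/critique/score sets from the reward model and voting over the resulting scores. The snippet below is a minimal illustration, not DeepSeek's implementation: `sample_grm_judgment` is a hypothetical stand-in for one call to a trained generative reward model, and the noisy scoring around a fixed quality level is purely illustrative.

```python
import random
from collections import Counter

def sample_grm_judgment(query, response, rng):
    """Hypothetical stand-in for one GRM sample. In SPCT, the model
    generates principles and a critique for the response, then a
    discrete score is extracted from the critique. Here we simulate
    that with a noisy draw around an assumed 'true' quality level."""
    true_quality = 7  # assumed quality of this response, for illustration
    noise = rng.choice([-2, -1, 0, 0, 1])
    score = max(1, min(10, true_quality + noise))
    principles = ["helpfulness", "accuracy"]  # illustrative principles
    return principles, score

def inference_time_scaled_reward(query, response, k=32, seed=0):
    """Inference-time scaling: draw k independent judgments and vote
    over the sampled scores, so more samples yield a more stable reward."""
    rng = random.Random(seed)
    scores = [sample_grm_judgment(query, response, rng)[1] for _ in range(k)]
    # Majority vote over discrete scores; summing scores behaves similarly.
    return Counter(scores).most_common(1)[0][0]
```

With a larger `k`, the voted score converges toward the model's typical judgment, which is the mechanism that lets reward quality improve with extra inference compute.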

Impact Score: 70

Samsung starts sampling 3 GB GDDR7 running at 36 Gbps

Samsung has begun sampling its fastest-ever GDDR7 memory at 36 Gbps in 24 Gb dies that translate to 3 GB per chip, and it is also mass producing 28.0 Gbps 3 GB modules reportedly aimed at a mid-cycle NVIDIA refresh.

FLUX.2 image generation models now released, optimized for NVIDIA RTX GPUs

Black Forest Labs, the frontier AI research lab, released the FLUX.2 family of visual generative models with new multi-reference and pose control tools and direct ComfyUI support. A collaboration with NVIDIA brings FP8 quantizations that reduce VRAM requirements by 40% while improving performance by 40%.

Aligning VMware migration with business continuity

Business continuity planning long focused on physical disasters, but cyber incidents, particularly ransomware, are now more common and often more damaging. In a survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year.
