Samsung completes HBM4 development, awaits NVIDIA approval

Samsung says it has cleared Production Readiness Approval for its first sixth-generation HBM (HBM4) and has shipped samples to NVIDIA for evaluation. Initial samples have exceeded NVIDIA's next-gen GPU requirement of 11 Gbps per pin, and HBM4 promises roughly 60% higher bandwidth than HBM3E.

Samsung has reportedly finished development of its first sixth-generation HBM (HBM4) and cleared Production Readiness Approval (PRA), the final internal checkpoint before full-volume manufacturing. According to reporting from AjuNews and Korean industry sources, the company has already shipped HBM4 samples to NVIDIA for evaluation on its upcoming Rubin platform. Initial units have exceeded NVIDIA’s next-gen GPU requirement of 11 Gbps per pin, and Samsung expects HBM4 to deliver roughly 60% higher bandwidth than current HBM3E parts.
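For a rough sense of scale, peak per-stack bandwidth is simply pin speed multiplied by interface width. The short Python sketch below works this out under assumed figures that are not from the article: a 2048-bit HBM4 interface and a 1024-bit HBM3E interface (per the respective JEDEC specifications) and a ~9.8 Gbps HBM3E reference pin speed, so the resulting percentages are illustrative rather than Samsung's own numbers.

```python
# Peak per-stack bandwidth: pin speed (Gbps) x bus width (bits) / 8 bits per byte.
def stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    return pin_speed_gbps * bus_width_bits / 8

# Assumed reference points (not figures from the article):
hbm3e = stack_bandwidth_gbs(9.8, 1024)         # HBM3E: 1024-bit interface, ~9.8 Gbps/pin
hbm4_jedec = stack_bandwidth_gbs(8.0, 2048)    # HBM4 JEDEC baseline: 2048-bit, 8 Gbps/pin
hbm4_sample = stack_bandwidth_gbs(11.0, 2048)  # reported Samsung sample speed: 11 Gbps/pin

print(f"HBM3E:            ~{hbm3e:.0f} GB/s per stack")
print(f"HBM4 (baseline):  ~{hbm4_jedec:.0f} GB/s per stack ({hbm4_jedec / hbm3e - 1:.0%} over HBM3E)")
print(f"HBM4 (11 Gbps):   ~{hbm4_sample:.0f} GB/s per stack ({hbm4_sample / hbm3e - 1:.0%} over HBM3E)")
```

Under those assumptions, the JEDEC-baseline HBM4 figure lands in the neighborhood of the roughly 60% uplift the article cites, while an 11 Gbps part would push per-stack bandwidth well past 2.5 TB/s.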

The design pairs improved 1c-class DRAM with a 4 nm logic base die, a combination Samsung says helps manage thermals and power at the increased speeds while closing the gap with rivals. With PRA cleared, Samsung’s Device Solutions division can move straight into volume production once NVIDIA signs off, and sources indicate manufacturing lines are already prepared to ramp. The company first flagged progress during its Q3 2025 earnings call on October 30, noting that HBM4 samples were in the hands of global customers and that mass production is planned for 2026. During that call Samsung stated, “HBM3E is currently in mass production and being sold to all related customers, while HBM4 samples are simultaneously being shipped to key clients.”

Samsung also confirmed that its foundry arm will prioritize stable 2 nm GAA output and HBM4 base-die production in 2026, alongside the ramp of its new Taylor, Texas fab. Separately, the company is reported to be developing a faster HBM4 variant that targets another 40% performance uplift, with an announcement possible as early as mid-February 2026. If validated by customers such as NVIDIA, the ready production lines and roadmap could accelerate supply of next-generation memory for high-performance compute and graphics platforms.

Impact Score: 58

NVIDIA and AWS expand full-stack partnership for Artificial Intelligence compute platform

NVIDIA and AWS expanded integration around Artificial Intelligence infrastructure at AWS re:Invent, announcing support for NVIDIA NVLink Fusion with Trainium4, Graviton and the Nitro System. The move aims to unify NVIDIA's scale-up interconnect and MGX rack architecture with AWS custom silicon to speed cloud-scale Artificial Intelligence deployments.

The state of Artificial Intelligence and DeepSeek strikes again

The Download highlights a new MIT Technology Review and Financial Times feature on the uneven economic effects of Artificial Intelligence and a roundup of major technology items, including DeepSeek’s latest model claims and an Amsterdam welfare Artificial Intelligence investigation.
