Samsung completes HBM4 development, awaits NVIDIA approval

Samsung says it has cleared Production Readiness Approval for its first sixth-generation HBM (HBM4) and has shipped samples to NVIDIA for evaluation. Initial samples have exceeded NVIDIA's next-generation GPU requirement of 11 Gbps per pin, and HBM4 promises roughly 60% higher bandwidth than HBM3E.

Samsung has reportedly finished development of its first sixth-generation HBM (HBM4) and cleared Production Readiness Approval (PRA), the final internal checkpoint before full-volume manufacturing. According to reporting from AjuNews and Korean industry sources, the company has already shipped HBM4 samples to NVIDIA for evaluation on its upcoming Rubin platform. Initial units have exceeded NVIDIA's next-generation GPU requirement of 11 Gbps per pin, and Samsung expects HBM4 to deliver roughly 60% higher bandwidth than current HBM3E parts.
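The headline figures can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the JEDEC interface widths (a 2048-bit bus per HBM4 stack, 1024-bit for HBM3E) and a 9.8 Gbps HBM3E pin speed; those baseline numbers are assumptions for illustration, while the 11 Gbps figure comes from the article.

```python
# Back-of-the-envelope per-stack bandwidth comparison.
# Assumptions (not from the article): 2048-bit HBM4 bus and 1024-bit
# HBM3E bus per the JEDEC specs; 9.8 Gbps HBM3E pin speed.

def stack_bandwidth_gb_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth of a single stack in GB/s (divide by 8: bits -> bytes)."""
    return bus_width_bits * pin_speed_gbps / 8

hbm3e = stack_bandwidth_gb_s(1024, 9.8)         # ~1254 GB/s
hbm4_base = stack_bandwidth_gb_s(2048, 8.0)     # JEDEC base speed -> 2048 GB/s
hbm4_sample = stack_bandwidth_gb_s(2048, 11.0)  # the article's 11 Gbps samples

print(f"HBM3E:          {hbm3e:.0f} GB/s")
print(f"HBM4 (8 Gbps):  {hbm4_base:.0f} GB/s (+{hbm4_base / hbm3e - 1:.0%})")
print(f"HBM4 (11 Gbps): {hbm4_sample:.0f} GB/s (+{hbm4_sample / hbm3e - 1:.0%})")
```

Under these assumptions, HBM4 at the JEDEC base speed of 8 Gbps lands near the article's "roughly 60%" uplift over HBM3E; samples running at 11 Gbps per pin would push per-stack bandwidth well beyond that baseline.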

The design pairs improved 1c-class DRAM with a 4 nm logic base die, a combination Samsung says helps manage thermals and power at the increased speeds while closing the gap with rivals. With PRA cleared, Samsung's Device Solutions division can move straight into volume production once NVIDIA signs off, and sources indicate manufacturing lines are already prepared to ramp. The company first flagged progress during its Q3 2025 earnings call on October 30, noting that HBM4 samples were in the hands of global customers and that mass production is planned for 2026. During that call Samsung stated, "HBM3E is currently in mass production and being sold to all related customers, while HBM4 samples are simultaneously being shipped to key clients."

Samsung also confirmed that its foundry arm will prioritize stable 2 nm GAA output and HBM4 base-die production in 2026, alongside the ramp of its new Taylor, Texas fab. Separately, the company is reported to be developing a faster HBM4 variant that targets another 40% performance uplift, with an announcement possible as early as mid-February 2026. If validated by customers such as NVIDIA, the ready production lines and roadmap could accelerate supply of next-generation memory for high-performance compute and graphics platforms.

Impact Score: 58

Microsoft unveils Maia 200 artificial intelligence inference accelerator

Microsoft has introduced Maia 200, a custom artificial intelligence inference accelerator built on a 3 nm process and designed to improve the economics of token generation for large models, including GPT-5.2. The chip targets higher performance per dollar for services like Microsoft Foundry and Microsoft 365 Copilot while supporting synthetic data pipelines for next generation models.

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry's 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla for artificial intelligence chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.

How high quality sound shapes virtual communication and trust

As virtual meetings, classes, and content become routine, researchers and audio leaders argue that sound quality is now central to how we judge credibility, intelligence, and trust. Advances in artificial intelligence-powered audio processing are making clear, unobtrusive sound both more critical and more accessible across work, education, and marketing.
