SK Hynix debuts 16-high 48 GB HBM4 memory for artificial intelligence and HPC

SK Hynix has introduced a 16-high 48 GB HBM4 memory module for artificial intelligence and high performance computing accelerators and previewed a customizable cHBM design that shifts logic functions from the processor into the memory stack.

SK Hynix has showcased its most advanced high bandwidth memory, a 16-high 48 GB HBM4 module aimed at artificial intelligence and high performance computing accelerators that need large amounts of memory. This is the first time SK Hynix has demonstrated a module more advanced than its earlier 12-high 36 GB HBM4, which runs at 11.7 Gbps per pin. For the new 16-high HBM4 there is no official datasheet or speed figure yet, but the company's presentation suggests the four additional DRAM layers are intended to provide higher capacity and bandwidth for next generation accelerators.
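The quoted figures can be sanity-checked with simple arithmetic. The sketch below assumes the JEDEC HBM4 per-stack interface width of 2048 bits and 24 Gb (3 GB) DRAM dies; SK Hynix has not published specifications for the 16-high part, so these are illustrative assumptions, not confirmed numbers.

```python
# Back-of-envelope HBM4 stack figures.
# Assumptions (not from the announcement): JEDEC HBM4's 2048-bit
# per-stack interface, and 24 Gb (3 GB) DRAM dies.

def stack_bandwidth_tbps(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in TB/s for a given per-pin data rate."""
    return pin_speed_gbps * bus_width_bits / 8 / 1000

# Capacity: 16 dies at 3 GB each matches the quoted 48 GB.
capacity_gb = 16 * 3
print(capacity_gb)  # 48

# At the 12-high part's quoted 11.7 Gbps per pin, a stack would peak near 3 TB/s.
print(round(stack_bandwidth_tbps(11.7), 2))  # 3.0
```

If the 16-high part ships at a higher pin speed, as the announcement's wording leaves room for, per-stack bandwidth scales linearly with that rate.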

The description of the 16-high HBM4 remained deliberately limited, which may reflect a competitive environment in which Micron and Samsung are still refining their own HBM4 modules before supplying them to chipmakers such as AMD and NVIDIA. The wording of the announcement leaves room for potential speed increases if needed, signaling that performance targets may still be adjusted as ecosystem requirements become clearer. For now, the focus is on capacity scaling and stack height rather than confirmed throughput figures.

Alongside the new HBM4 stack, SK Hynix also presented a custom base die HBM concept called cHBM. This design places a customized base die at the bottom of the DRAM stack that incorporates logic functions typically found on the GPU or ASIC logic die rather than in the memory itself. In SK Hynix's demonstration this includes features such as die-to-die PHYs, embedded memory controllers, the HBM PHY, and related control logic, which can free up area on the GPU die for more compute logic and higher performance. The company emphasizes that customers can configure this base die to their needs and can even embed processing logic directly inside the die area to tailor memory subsystems more tightly to their accelerators.


Siemens debuts Digital Twin Composer for industrial metaverse deployments

Siemens has introduced Digital Twin Composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles and reduced capital expenditure through physics-accurate simulations and artificial intelligence driven optimization.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.

Patch notes detail split compute and IO tiles in Intel Diamond Rapids Xeon 7

Linux kernel patch notes reveal that Intel’s upcoming Diamond Rapids Xeon 7 server processors separate compute and IO tiles and adopt new performance monitoring and PCIe 6.0 support. The changes point to a more modular architecture and a streamlined product stack focused on 16-channel memory configurations.
