Intel and SoftBank detail HB3DM memory for Artificial Intelligence accelerators

Intel and SoftBank subsidiary Saimemory are advancing HB3DM, a stacked memory design built on Z-Angle Memory technology for Artificial Intelligence accelerators. The approach targets significantly higher bandwidth than HBM4, though with lower capacity in its first generation.

Intel and SoftBank, through their subsidiary Saimemory, have been developing an alternative to high-bandwidth memory for modules used with powerful Artificial Intelligence accelerators. Saimemory is scheduled to present a paper at VLSI 2026 in June on HB3DM memory, which is based on Z-Angle Memory technology. The design uses vertical stacking along the Z-axis in a way that resembles traditional HBM, while aiming to deliver stronger results through advanced manufacturing methods.

The first generation of HB3DM will feature a total of nine layers, stacked using a hybrid bonding technique for 3D chip placement. At the base will be a logic layer that manages data movement within the chip, with eight DRAM layers on top for data storage. Each layer will include about 13,700 TSVs for hybrid bonding. In terms of capacity, HB3DM will offer about 1.125 GB per layer, translating to 10 GB per memory module.
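The stack arithmetic above can be sanity-checked quickly. Note one assumption: applying the ~1.125 GB per-layer figure across all nine layers (including the base logic layer) is what reproduces the quoted ~10 GB module capacity; counted over the eight DRAM layers alone it would come to 9 GB.

```python
# Stack arithmetic using the article's figures.
layers_total = 9          # 1 logic layer + 8 DRAM layers
tsvs_per_layer = 13_700   # approximate TSV count per layer
gb_per_layer = 1.125      # approximate capacity per layer

# Total TSVs across the stack.
print(layers_total * tsvs_per_layer)   # 123300

# Assumption: per-layer capacity applied to all nine layers,
# which lands on the quoted "about 10 GB" per module.
print(layers_total * gb_per_layer)     # 10.125
```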

Intel can achieve approximately 0.25 Tb/s of memory bandwidth per mm² of die area, so a 10 GB module with a 171 mm² die works out to around 5.3 TB/s. Those figures position HB3DM as a potential bandwidth leader over HBM4, which delivers around 2 TB/s per stack, less than half of what HB3DM promises.
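The bandwidth figure follows directly from the per-area number and the die size quoted in the article; the only added step here is the conversion from terabits to terabytes:

```python
# Bandwidth arithmetic from the article's figures.
bandwidth_per_mm2_tbits = 0.25   # Tb/s per mm^2
die_area_mm2 = 171               # mm^2 for the 10 GB module

total_tbits = bandwidth_per_mm2_tbits * die_area_mm2   # 42.75 Tb/s
total_tbytes = total_tbits / 8                         # bits -> bytes

print(f"{total_tbytes:.2f} TB/s")  # 5.34 TB/s, matching the quoted ~5.3 TB/s
```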

Capacity remains the main tradeoff in the current design: HB3DM offers only 10 GB per module, whereas HBM4 can reach up to 48 GB per stack. Intel may increase the number of layers as the technology moves toward production, but for now HB3DM stands out primarily for bandwidth rather than density.
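Using the quoted figures for both technologies, the tradeoff can be put in ratio form:

```python
# Ratio comparison using the article's quoted figures.
hb3dm_bw, hbm4_bw = 5.3, 2.0    # TB/s per module / per stack
hb3dm_cap, hbm4_cap = 10, 48    # GB per module / per stack

print(f"bandwidth advantage: {hb3dm_bw / hbm4_bw:.2f}x")  # 2.65x
print(f"capacity deficit:    {hbm4_cap / hb3dm_cap:.2f}x")  # 4.80x
```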

Impact Score: 64

Generative Artificial Intelligence is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative Artificial Intelligence is being used mainly as a productivity tool rather than a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.

Samsung strike threat raises chip supply risks

A possible labor strike at Samsung Electronics in South Korea is raising concerns about chip production disruptions, client defections, and pressure on its position in the global semiconductor race. The dispute centers on bonus rules, but the larger risk is damage to Samsung’s credibility as a reliable supplier for major tech customers.

Microsoft previews Shader Model 6.10 for GPU Artificial Intelligence engines

Microsoft has introduced Shader Model 6.10 in Agility SDK 1.720-preview with a new matrix API designed to unify access to dedicated GPU Artificial Intelligence hardware from AMD, Intel, and NVIDIA. The change is aimed at making neural rendering features easier to deploy across multiple vendors with a single programming model.
