Intel, SoftBank, and University of Tokyo launch Saimemory to develop HBM alternative for artificial intelligence accelerators

Intel, SoftBank, and the University of Tokyo are forming Saimemory, a new startup aiming to reinvent memory for artificial intelligence accelerators by challenging high-bandwidth memory standards.

As demand for artificial intelligence accelerators continues its rapid ascent, pressure mounts on memory suppliers to deliver solutions capable of supporting faster training and increased token throughput. In response to this market force, Intel, SoftBank, and the University of Tokyo have quietly established a startup named 'Saimemory' targeting the creation of a new high-bandwidth memory technology. The initiative, centered on innovative stacked DRAM designs, leverages Intel's depth in chip engineering and the University of Tokyo's cutting-edge memory patents. SoftBank has made a substantial funding commitment, reportedly close to ¥3 billion, to drive the venture forward, and other organizations such as the Riken Research Institute and Shinko Electric Industries are considering participation as investors or technology partners. The collaboration also seeks government backing to accelerate development.

Current high-bandwidth memory, or HBM, technologies use through-silicon vias (TSVs) to interconnect multiple DRAM layers, with a wide-bus interposer linking the stack to the processor at data rates above 1 TB/s. Saimemory's proposed architecture introduces novel approaches to signal routing and refresh management, aiming to deliver gains in energy efficiency, latency, and overall performance. The landscape, however, is marked by earlier failed attempts to upend the status quo. The Hybrid Memory Cube project, launched in 2011 by Samsung and Micron, promised significant speed improvements but ultimately fell short and was discontinued in 2018 for lack of industry adoption. That history highlights the formidable challenge of dislodging a well-entrenched memory standard such as HBM.
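The >1 TB/s figure follows directly from HBM's wide interface. A back-of-envelope sketch, using illustrative HBM3-class numbers (the 1024-bit bus width and 6.4 Gb/s pin rate are common published specifications, not figures from this article):

```python
# Peak-bandwidth arithmetic for stacked DRAM with a wide interposer bus.
# Figures below are illustrative HBM3-class values, not from the article.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one memory stack in GB/s (1 byte = 8 bits)."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = stack_bandwidth_gbs(1024, 6.4)  # 1024-bit bus at 6.4 Gb/s/pin
total = 2 * per_stack                       # an accelerator with two stacks

print(f"{per_stack:.1f} GB/s per stack")    # 819.2 GB/s per stack
print(f"{total:.1f} GB/s total")            # two stacks already exceed 1 TB/s
```

This is why the interposer matters: pushing a 1024-bit bus through a package substrate is impractical, so the wide, short traces of a silicon interposer are what make the aggregate rate achievable.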

If Saimemory's new approach proves successful, Intel is well positioned to be the first to adopt the technology for its next-generation artificial intelligence accelerators. The startup may also pitch trial chips to competitors such as AMD and NVIDIA, depending on early results and industry interest. Yet mass adoption hinges on translating technical promise into real-world manufacturability and yield at scale. The ambitious timeline targets prototype chips in 2027 and commercial volume in 2030, reflecting both the opportunity and the hurdles in fundamentally rethinking memory architectures for artificial intelligence workloads.
