Micron unveils 256 GB low power server memory module for artificial intelligence data centers

Micron has begun shipping customer samples of a 256 GB low power server memory module built on a 32 Gb LPDDR5X design, targeting next generation artificial intelligence data centers. The module is designed to address escalating memory demands from modern artificial intelligence workloads and evolving data center architectures.

Micron Technology has started shipping customer samples of a 256 GB SOCAMM2 low power dynamic random access memory module that is described as the industry’s highest capacity LPDRAM offering for server environments. The new module is enabled by what Micron identifies as the industry’s first monolithic 32 Gb LPDDR5X design, positioning the product as a key step toward higher density, lower power memory in data center systems that are optimized for artificial intelligence workloads.
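The capacity claim implies a straightforward bit-count: a quick sketch of the arithmetic, assuming the module is built purely from the monolithic 32 Gb dies (an illustration, not Micron's disclosed package layout):

```python
# Back-of-the-envelope die count for a 256 GB module built from
# monolithic 32 Gb LPDDR5X dies. Assumes capacity comes only from
# these dies; the real package stacking is not public here.
MODULE_CAPACITY_GB = 256      # module capacity in gigabytes
DIE_DENSITY_Gb = 32           # density of one monolithic die in gigabits

module_capacity_Gb = MODULE_CAPACITY_GB * 8        # bytes -> bits
die_count = module_capacity_Gb // DIE_DENSITY_Gb   # dies needed
print(die_count)  # → 64
```

Doubling the per-die density from 16 Gb to 32 Gb halves the number of dies needed for a given capacity, which is where the density and power advantages of a monolithic design come from.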

The company frames the introduction of the 256 GB SOCAMM2 module as a transformational development for artificial intelligence data centers that need to balance performance with power efficiency. According to Micron, this low power memory capacity is intended to unlock new system architectures by allowing servers to host larger working data sets and models while managing energy consumption more effectively, which is critical in large scale deployments.

Micron links the product to a broader shift in data center requirements driven by the convergence of artificial intelligence training, inference, agentic artificial intelligence and general purpose compute. The company notes that modern artificial intelligence workloads combine large parameter counts, expansive context windows and persistent key value caches, and that core compute continues to scale in data intensity, concurrency and memory footprint. Together these trends are reshaping data center system architectures and increasing demand for high capacity, low power memory solutions.
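To see why persistent key value caches strain server memory, here is a minimal estimate of cache size per inference session; the model shape (layer count, head geometry, precision) is hypothetical and not tied to any specific deployment:

```python
# Rough key-value (KV) cache footprint for one inference session,
# illustrating how long context windows inflate memory demand.
# All model parameters below are hypothetical example values.
def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    # Two cached tensors per layer (keys and values), each holding
    # context_len x kv_heads x head_dim elements.
    return 2 * layers * context_len * kv_heads * head_dim * bytes_per_elem

# Example: 80 layers, 8 KV heads of width 128, 128K-token context, FP16.
gib = kv_cache_bytes(80, 8, 128, 128 * 1024) / 2**30
print(f"{gib:.0f} GiB per session")  # → 40 GiB per session
```

At tens of gibibytes per long-context session, a handful of concurrent sessions can consume a 256 GB module on cache alone, before model weights and working data are counted.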

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a denser derivative of A14 aimed at next-generation artificial intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
