Marvell introduces industry's first 2 nm custom SRAM for next-gen data centers

Marvell launches a 2 nm custom SRAM, targeting accelerated infrastructure for cloud and Artificial Intelligence applications.

Marvell Technology, Inc. has unveiled the industry's first 2 nanometer custom Static Random Access Memory (SRAM), aiming to advance the performance profile of data-intensive devices such as custom XPUs for cloud data centers and Artificial Intelligence clusters. The new SRAM combines Marvell's own custom circuitry and software with leading-edge 2 nm process technology, resulting in memory that delivers up to 6 gigabits of high-speed data storage. The critical breakthrough is drastically reduced memory power consumption and die area at comparable memory densities, delivering tangible efficiency gains for large-scale compute environments.

This innovation marks yet another step in Marvell's strategy to redefine memory hierarchies in accelerated computing infrastructure. The custom SRAM is part of a broader suite of memory solutions by the company. Marvell recently launched its Compute Express Link (CXL) technology, which is designed for seamless integration into custom silicon, enabling cloud servers to access terabytes of additional memory and supplementary computational capacity. In parallel, the company showcased a custom high-bandwidth memory (HBM) solution that boosts memory capacity by up to 33 percent, optimizing both spatial footprint and energy efficiency for densely packed silicon environments.

By bringing 2 nm custom SRAM to market, Marvell further entrenches itself as a leader in advanced semiconductor technologies targeting the most demanding enterprise workloads. The company's commitment to continued innovation in both memory density and power savings reflects growing industry demand for more efficient, high-performance solutions for cloud-scale and Artificial Intelligence-driven applications. These memory advancements position Marvell to address next-generation requirements as data center infrastructure and accelerated compute clusters continue to evolve.

Google expands agentic enterprise push

Google used Cloud Next ’26 to position itself as a more integrated enterprise Artificial Intelligence provider, combining models, infrastructure, security, and multicloud data services. The strategy broadens its reach into enterprise software while emphasizing interoperability with rival clouds and platforms.

China still blocking Nvidia H200 chip sales

Nvidia has yet to complete H200 sales into China even after the United States reopened exports. Chinese authorities are reportedly limiting imports as Beijing pushes buyers toward domestic semiconductor suppliers.

OpenAI prepares GPT-5.5 launch

OpenAI is reportedly preparing GPT-5.5, its first fully retrained base model since GPT-4.5, as it pushes harder into enterprise software. The model is expected to bring native multimodal capabilities and stronger support for agent-based workflows.
