Marvell unveils ultra-low-power 2 nm dense SRAM with major efficiency gains

Marvell is detailing performance data for its 2 nm custom SRAM IP, showing sharp gains in power, area, and bandwidth density over standard dense SRAM offerings. The company positions the architecture as a strategic edge as logic scaling continues to outpace memory scaling at leading semiconductor nodes.

Marvell used its Analyst Day 2025 event to spotlight new custom silicon IP, focusing on a 2 nm SRAM design that it says beats industry-standard dense SRAM on both power and density. The 2 nm SRAM IP, initially launched in June, is now accompanied by detailed performance figures that Marvell presents as clear evidence of its advantages over conventional solutions. The company is targeting dense systems-on-chip, where memory blocks and their layout strongly influence overall power and area.

In a 256K-instance comparison, Marvell reports an 80% reduction in total power consumption, a 37% smaller area, and cycle times that are 22% faster. The company also notes that its memory layout is more rectangular, which is intended to ease integration into dense SoCs that often need regular, block-like macros to optimize floorplans. Marvell frames these improvements as the combined benefit of circuit-level changes and physical design choices that let its SRAM slot more cleanly into advanced, logic-centric designs.
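The reported deltas can be turned into multipliers with back-of-the-envelope arithmetic. The percentages below come from the article; everything else is a derived sketch, not a Marvell figure, and reading "22% faster" as a 22% shorter cycle time is an assumption.

```python
# Sketch: converting the reported 256K-instance deltas into multipliers.
# Only the percentages are from the article; the combined energy figure
# is a rough derivation, not Marvell data.

power_mult = 1 - 0.80    # 80% lower total power  -> 0.20x
area_mult  = 1 - 0.37    # 37% smaller area       -> 0.63x
cycle_mult = 1 - 0.22    # assumes "22% faster" means a 22% shorter cycle time

# Energy per access scales with power x cycle time, so the combined
# claim implies roughly 0.20 * 0.78 ~ 0.156x energy per access,
# i.e. about a 6.4x improvement in energy efficiency.
energy_mult = power_mult * cycle_mult
print(round(power_mult, 2), round(area_mult, 2),
      round(cycle_mult, 2), round(energy_mult, 3))
```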

Further comparisons with leading alternatives show that Marvell's custom SRAM uses 50% less area at the same bandwidth, reduces standby power by 66%, and delivers 17 times more bandwidth per mm². Marvell attributes these gains to redesigned clocking and port structures tuned to extract more bandwidth from on-die SRAM without incurring the typical power penalties. The company argues that this architectural approach yields significantly higher bandwidth density and lower power consumption than standard dense SRAM IP, positioning such custom IP as a major advantage at modern semiconductor nodes, where logic scaling continues to outpace memory.
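How the area claim relates to the bandwidth-density claim can be checked with simple arithmetic. In the sketch below, all absolute values are normalized placeholders (baseline set to 1.0); only the 50% and 17x ratios come from the article.

```python
# Sketch: relating "50% less area at the same bandwidth" to
# "17x bandwidth per mm^2". Baseline area and bandwidth are
# hypothetical normalized values, not Marvell measurements.

baseline_area = 1.0          # normalized baseline macro area (mm^2)
baseline_bw   = 1.0          # normalized baseline bandwidth

custom_area = baseline_area * (1 - 0.50)   # 50% less area

# Bandwidth density = bandwidth / area. Halving the area at equal
# bandwidth only doubles density:
density_same_bw = baseline_bw / custom_area         # 2.0x

# Reaching 17x density therefore implies the custom macro also
# delivers substantially more raw bandwidth per instance, not just
# a smaller footprint:
implied_bw_gain = 17 * custom_area / baseline_area  # 8.5x
print(density_same_bw, implied_bw_gain)
```

In other words, the 17x bandwidth-density figure cannot follow from the area reduction alone; it implies roughly 8.5x more bandwidth per macro under these normalized assumptions.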

Impact Score: 58

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
