3D DRAM accelerator promises major artificial intelligence inference gains

d-Matrix and Alchip have announced a collaboration to build what they call the world’s first 3D DRAM-based datacenter inference accelerator, aiming to overcome the memory bandwidth and cost limits of current AI infrastructure.

The collaboration pairs Alchip’s ASIC design experience with d-Matrix’s digital in-memory compute platform to address the performance, cost, and scalability constraints facing today’s AI inference deployments. According to the partners, combining compute-memory integration with advanced ASIC capabilities will increase throughput and energy efficiency for inference workloads.

At the heart of the project is d-Matrix’s 3DIMC, a 3D-stacked DRAM implementation designed to break traditional memory bandwidth bottlenecks. d-Matrix reports that 3DIMC has been validated on Pavehawk test silicon in its labs and claims up to 10× faster inference compared with solutions built around HBM4. The first commercial product to use the technology will be the d-Matrix Raptor inference accelerator, positioned as the successor to the company’s Corsair platform. Raptor targets generative and agentic AI workloads and other compute-intensive inference tasks that increasingly require specialized silicon.

The companies frame the extension of compute-memory integration into 3D DRAM as a logical next step for hyperscalers and enterprises coping with growing inference demand. For hardware architects, chip designers, and systems engineers, the key promises are improved cost efficiency, higher inference throughput, and better energy efficiency than HBM4-based approaches. d-Matrix emphasizes that the collaboration seeks to make AI inference faster, more cost-effective, and more sustainable at scale, while industry observers will be watching how 3D DRAM architectures perform in practice against advanced high-bandwidth memory solutions.
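To see why memory bandwidth, not compute, often bounds inference throughput, consider a back-of-envelope calculation: during single-stream decoding, each generated token must stream roughly the full set of model weights from memory. The sketch below illustrates that relationship; all figures (model size, byte width, bandwidth numbers) are illustrative assumptions, not d-Matrix, 3DIMC, or HBM4 specifications.

```python
# Back-of-envelope model of bandwidth-bound LLM decoding.
# All numbers are hypothetical, chosen only to show the scaling.

def decode_tokens_per_second(model_params_billions: float,
                             bytes_per_param: int,
                             mem_bandwidth_gbps: float) -> float:
    """Upper bound on single-stream decode rate when each generated
    token streams all model weights from memory exactly once."""
    bytes_per_token = model_params_billions * 1e9 * bytes_per_param
    return mem_bandwidth_gbps * 1e9 / bytes_per_token

# Hypothetical 70B-parameter model with 8-bit weights.
baseline = decode_tokens_per_second(70, 1, 3_000)   # ~3 TB/s class memory
stacked = decode_tokens_per_second(70, 1, 30_000)   # assumed 10x bandwidth

print(f"baseline: {baseline:.0f} tok/s, stacked: {stacked:.0f} tok/s")
```

Under this simplified model, throughput scales linearly with memory bandwidth, which is why a 10× bandwidth improvement translates directly into a claim of up to 10× faster inference for bandwidth-bound workloads.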
