d-Matrix, a pioneer in generative Artificial Intelligence (AI) inference for data centers, and Alchip, a leader in high-performance and AI infrastructure ASICs, announced a joint effort to develop the world's first 3D DRAM-based datacenter inference accelerator. The companies say the collaboration pairs Alchip's ASIC design expertise with d-Matrix's digital in-memory compute platform architecture to attack the performance and cost bottlenecks that constrain current AI infrastructure.
A key technology from the collaboration, d-Matrix 3DIMC, is already featured on d-Matrix Pavehawk test silicon and has been validated in d-Matrix's labs. d-Matrix will commercially debut 3DIMC on the Raptor inference accelerator, which is described as the successor to d-Matrix Corsair. The partners claim the 3D-stacked DRAM design will be capable of delivering up to 10 times faster inference than HBM4-based solutions, positioning it as a high-performance option for demanding workloads.
The announcement frames the new solution as targeted at generative and agentic AI workloads, where inference speed and cost efficiency are critical. By integrating in-memory compute with 3D-stacked DRAM and custom ASIC design, d-Matrix and Alchip aim to reduce the latency and expense associated with current memory and accelerator stacks. The collaboration highlights a path toward datacenter accelerators that prioritize memory architecture alongside ASIC design to improve inference throughput and total cost of ownership.
