d-Matrix announces JetStream I/O accelerators for ultra-low latency artificial intelligence inference

d-Matrix introduced JetStream, a custom I/O card designed to deliver data center-scale artificial intelligence inference with ultra-low latency. The company says JetStream, paired with its Corsair accelerators and Aviator software, can scale to state-of-the-art models and improve speed, cost performance, and energy efficiency versus GPU-based solutions.

d-Matrix announced JetStream, a custom I/O card engineered to deliver data center-scale artificial intelligence inference. The company positioned the product against a backdrop of rapidly expanding consumer use, noting that millions of people now use artificial intelligence services and that the industry focus is shifting from model training to deploying models with ultra-low latency for multiple concurrent users. d-Matrix described JetStream as built from the ground up to meet those latency and scale requirements.

According to the announcement, JetStream is intended to operate alongside d-Matrix Corsair accelerators and d-Matrix Aviator software to support state-of-the-art models exceeding 100B parameters. d-Matrix provided specific performance claims for the combined stack, stating that JetStream delivers 10x the speed, 3x better cost performance, and 3x greater energy efficiency compared with GPU-based solutions. The company framed those metrics as benefits for large-scale inference workloads, including agentic models, reasoning tasks, and multi-modal interactive content.

With JetStream added to its product lineup, d-Matrix said it now offers a complete platform that spans compute, software, and networking. The company positioned this integrated approach as relatively rare among infrastructure providers and aimed at customers deploying artificial intelligence services at scale with stringent latency, cost, and efficiency targets. No additional technical specifications, pricing, or availability details were provided in the announcement.

Impact Score: 72

Nvidia skips a new GeForce generation as artificial intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in artificial intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.

SK hynix starts mass production of 192 GB SOCAMM2

SK hynix has begun mass production of the 192 GB SOCAMM2, a next-generation memory module standard built on 1c nm LPDDR5X low-power DRAM. The module is positioned as a primary memory solution for next-generation artificial intelligence servers.

AMD taps GlobalFoundries for co-packaged optics in Instinct MI500

AMD is preparing a renewed manufacturing link with GlobalFoundries to bring co-packaged optics to its Instinct MI500 artificial intelligence accelerators. The move is aimed at improving bandwidth and power efficiency in data center systems by moving beyond copper-based interconnects.
