d-Matrix announces JetStream I/O accelerators for ultra-low latency artificial intelligence inference

d-Matrix introduced JetStream, a custom I/O card designed to deliver data center-scale artificial intelligence inference with ultra-low latency. The company says JetStream, paired with its Corsair accelerators and Aviator software, can scale to state-of-the-art models and improve speed, cost performance, and energy efficiency versus GPU-based solutions.

d-Matrix announced JetStream, a custom I/O card engineered to deliver data center-scale artificial intelligence inference. The company positioned the product against a backdrop of rapidly expanding consumer use, noting that millions of people now use artificial intelligence services and that the industry focus is shifting from model training to deploying models with ultra-low latency for multiple concurrent users. d-Matrix described JetStream as built from the ground up to meet those latency and scale requirements.

According to the announcement, JetStream is intended to operate alongside d-Matrix Corsair accelerators and d-Matrix Aviator software to support state-of-the-art models exceeding 100B parameters. d-Matrix provided specific performance claims for the combined stack, stating that JetStream delivers 10x the speed, 3x better cost performance, and 3x greater energy efficiency compared with GPU-based solutions. The company framed those metrics as benefits for large-scale inference workloads, including agentic models, reasoning tasks, and multi-modal interactive content.

With JetStream added to its product lineup, d-Matrix said it now offers a complete platform that spans compute, software, and networking. The company positioned this integrated approach as relatively rare among infrastructure providers and aimed at customers deploying artificial intelligence services at scale with stringent latency, cost, and efficiency targets. No additional technical specifications, pricing, or availability details were provided in the announcement.

Impact Score: 72

Sneha Goenka’s ultra-fast sequencing cuts genetic diagnosis to hours

Sneha Goenka, MIT Technology Review’s 2025 Innovator of the Year and an assistant professor at Princeton, helped build a sequencing pipeline that reduces genetic diagnosis from weeks to hours. Her work pairs cloud computing architectures with real-time streaming analysis to accelerate clinical genomics.

Why basic science deserves our boldest investment

The transistor story shows how curiosity-driven basic science, supported by long-term funding, enabled the information age and today's artificial intelligence technologies. Current federal and university funding pressures risk undermining the next wave of breakthroughs.
