d-Matrix announced JetStream, a custom I/O card engineered to deliver data center-scale artificial intelligence inference. The company positioned the product against a backdrop of rapidly expanding consumer use, noting that millions of people now use artificial intelligence services and that the industry focus is shifting from model training to deploying models with ultra-low latency for multiple concurrent users. d-Matrix described JetStream as built from the ground up to meet those latency and scale requirements.
According to the announcement, JetStream is intended to operate alongside d-Matrix Corsair accelerators and d-Matrix Aviator software to support state-of-the-art models exceeding 100B parameters. d-Matrix provided specific performance claims for the combined stack, stating that JetStream delivers 10x the speed, 3x better cost performance, and 3x greater energy efficiency compared with GPU-based solutions. The company framed those metrics as benefits for large-scale inference workloads, including agentic models, reasoning tasks, and multi-modal interactive content.
With JetStream added to its product lineup, d-Matrix said it now offers a complete platform that spans compute, software, and networking. The company positioned this integrated approach as relatively rare among infrastructure providers and aimed at customers deploying artificial intelligence services at scale with stringent latency, cost, and efficiency targets. No additional technical specifications, pricing, or availability details were provided in the announcement.