d-Matrix announces JetStream I/O accelerators for ultra-low latency artificial intelligence inference

d-Matrix introduced JetStream, a custom I/O card designed to deliver data center-scale artificial intelligence inference with ultra-low latency. The company says JetStream, paired with its Corsair accelerators and Aviator software, can scale to state-of-the-art models and improve speed, cost performance, and energy efficiency versus GPU-based solutions.

d-Matrix announced JetStream, a custom I/O card engineered to deliver data center-scale artificial intelligence inference. The company positioned the product against a backdrop of rapidly expanding consumer use, noting that millions of people now use artificial intelligence services and that the industry focus is shifting from model training to deploying models with ultra-low latency for multiple concurrent users. d-Matrix described JetStream as built from the ground up to meet those latency and scale requirements.

According to the announcement, JetStream is intended to operate alongside d-Matrix Corsair accelerators and d-Matrix Aviator software to support state-of-the-art models exceeding 100B parameters. d-Matrix provided specific performance claims for the combined stack, stating that JetStream delivers 10x the speed, 3x better cost performance, and 3x greater energy efficiency compared with GPU-based solutions. The company framed those metrics as benefits for large-scale inference workloads, including agentic models, reasoning tasks, and multi-modal interactive content.

With JetStream added to its product lineup, d-Matrix said it now offers a complete platform that spans compute, software, and networking. The company positioned this integrated approach as relatively rare among infrastructure providers and aimed at customers deploying artificial intelligence services at scale with stringent latency, cost, and efficiency targets. No additional technical specifications, pricing, or availability details were provided in the announcement.

Impact Score: 72

Intel Fab 52 outscales TSMC Arizona in advanced wafer production

Intel Fab 52 in Arizona is producing more than 40,000 wafers per month on its 18A node, outpacing TSMC’s current Arizona output on older process technologies. The facility highlights Intel’s focus on advanced manufacturing for its own products while TSMC keeps its leading nodes primarily in Taiwan.

Intel details packaging for 16 compute dies and 24 HBM5 modules

Intel Foundry has outlined an advanced packaging approach that combines Foveros 3D and EMIB-T interconnect to scale silicon beyond conventional reticle limits, targeting configurations with 16 compute dies and 24 HBM5 memory modules in one package. The design is built around upcoming 18A and 14A process nodes and aims to support current and future high bandwidth memory standards.

Four bright spots in climate news in 2025

Despite record emissions and worsening climate disasters in 2025, several developments in China's energy transition, grid-scale batteries, artificial intelligence-driven investment, and global warming projections offered genuine reasons for cautious optimism.
