d-Matrix Unveils JetStream I/O for Ultra-Fast AI Inference at Scale

Paired with d-Matrix’s Corsair accelerators and Aviator software, JetStream can deliver up to 10x speedups, 3x better cost-performance, and 3x higher energy efficiency than GPU-based alternatives
d-Matrix® today announced the expansion of its AI product portfolio with d-Matrix JetStream, a custom I/O card designed from the ground up to deliver industry-leading, data center–scale AI inference.
With millions of people now using AI services – and the rise of agentic AI, reasoning, and multi-modal interactive content – the industry’s focus is quickly shifting from model training to deploying AI at ultra-low latency across many concurrent users.
When combined with d-Matrix Corsair accelerators and d-Matrix Aviator software, JetStream I/O accelerators can scale to state-of-the-art models exceeding 100B parameters, delivering 10x the speed, 3x better cost performance, and 3x greater energy efficiency than GPU-based solutions.1
JetStream’s addition to the company’s product portfolio makes d-Matrix one of the few AI infrastructure providers offering a complete platform that spans compute, software, and networking.
“JetStream networking comes at a time when AI is going multimodal, and users are demanding hyper-fast levels of interactivity,” said Sid Sheth, d-Matrix co-founder and CEO. “Through JetStream, along with our already-announced Corsair compute accelerator platform, d-Matrix is providing a path forward that makes AI both scalable and blazing fast.”
JetStream is a transparent NIC and streaming solution optimized for d-Matrix Corsair accelerators. Packaged in an industry-standard PCIe form factor and compatible with off-the-shelf Ethernet switches, JetStream NICs are easy to deploy within existing data centers – eliminating the need for costly infrastructure replacement.
Availability
JetStream NICs are full-height PCIe Gen5 cards delivering a maximum of 400Gbps bandwidth. Samples are available now, with full production expected by year-end.