Artificial Intelligence chip startup MatX has raised $500 million in a Series B funding round to accelerate development of its MatX One processor, a chip tailored for large language model workloads. The round was co-led by Situational Awareness, an investment fund founded by former OpenAI researcher Leopold Aschenbrenner, and trading firm Jane Street, with additional backing from Spark Capital, Triatomic Capital, Harpoon, Alchip Technologies, and Marvell. The capital injection positions MatX to compete in the increasingly crowded market for high-performance Artificial Intelligence accelerators.
Founded in 2024 by former Google engineers Reiner Pope and Mike Gunter, MatX is focused on building processors designed specifically for large language models. At Google, Pope worked on Artificial Intelligence software while Gunter designed the hardware, including chips, that ran those systems; the pair now aims to translate that experience into a vertically informed chip design. In a LinkedIn post detailing the funding, Pope said the upcoming MatX One chip will deliver “much higher throughput than any other chip while also achieving the lowest latency,” targeting both raw performance and responsiveness for model training and inference.
The MatX One design is based on a splittable systolic array architecture, which breaks processing elements into smaller arrays to improve efficiency. Pope said the chip will blend the low-latency characteristics of SRAM-first designs with the long-context capabilities of HBM, and that “these elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs.” He argued that higher throughput and lower latency will translate into “smarter and faster models for your subscription dollar.” According to TechCrunch, MatX will fabricate the chip at TSMC and plans to start shipping devices in 2027, giving the startup a defined timeline to bring its silicon for large language model workloads to market.
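To make the architectural idea concrete, the sketch below is a toy software model of a systolic-array matrix multiply and of "splitting" one large array into smaller independent tiles that each compute a block of the output. MatX has not published the internals of MatX One, so the output-stationary dataflow, the array sizes, and the tiling scheme here are illustrative assumptions, not the actual design.

```python
# Toy model of a systolic-array matrix multiply (output-stationary dataflow)
# and a "split" variant that partitions the work across smaller arrays.
# Illustrative assumptions only; not based on published MatX One details.

def systolic_matmul(A, B):
    """Multiply A (m x k) by B (k x n) as an output-stationary systolic
    array would: each cell (i, j) accumulates one output element, and
    operands are skewed so cell (i, j) sees A[i][l] and B[l][j] at
    cycle t = i + j + l."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for t in range(m + n + k - 2 + 1):      # total cycles for the skewed wavefront
        for i in range(m):
            for j in range(n):
                l = t - i - j               # which operand pair reaches this cell now
                if 0 <= l < k:
                    C[i][j] += A[i][l] * B[l][j]
    return C

def split_systolic_matmul(A, B, tile):
    """Partition the output into tile x tile blocks and run each block on
    its own small systolic array (modeled here sequentially). This is the
    general idea behind splitting one large array into smaller ones."""
    m, n = len(A), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            sub_a = A[i0:i0 + tile]                      # rows for this block
            sub_b = [row[j0:j0 + tile] for row in B]     # columns for this block
            block = systolic_matmul(sub_a, sub_b)
            for di, row in enumerate(block):
                for dj, v in enumerate(row):
                    C[i0 + di][j0 + dj] = v
    return C
```

In this simplified model both versions produce identical results; the point of splitting in hardware is utilization: small matrices (common in inference) can occupy several small arrays concurrently instead of leaving most of one large array idle.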
