MatX raises $500m to build high-throughput large language model training chip

MatX has secured $500 million in Series B funding to develop a custom processor optimized for large language model training, targeting higher throughput and lower latency than existing chips. The company plans to manufacture the MatX One with TSMC and begin shipping in 2027.

Artificial intelligence chip startup MatX has raised $500 million in a Series B funding round to accelerate development of its MatX One processor, a chip tailored for large language model workloads. The round was co-led by Situational Awareness, an investment fund founded by former OpenAI researcher Leopold Aschenbrenner, and trading firm Jane Street, with additional backing from Spark Capital, Triatomic Capital, Harpoon, Alchip Technologies, and Marvell. The capital injection positions MatX to compete in the increasingly crowded market for high-performance artificial intelligence accelerators.

Founded in 2024 by former Google engineers Reiner Pope and Mike Gunter, MatX is focused on building processors specifically designed to support large language models. At Google, Pope worked on artificial intelligence software while Gunter designed the hardware, including chips, that ran those systems; the pair now aims to translate that experience into a vertically informed chip design. In a LinkedIn post detailing the funding, Pope said the upcoming MatX One chip will deliver “much higher throughput than any other chip while also achieving the lowest latency,” targeting both raw performance and responsiveness for model training and inference.

The MatX One design is based on a splittable systolic array architecture, which breaks processing elements into smaller arrays to improve efficiency. Pope said the chip will blend the low-latency characteristics of SRAM-first designs with the long-context capabilities of HBM, and that “these elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs.” He argued that higher throughput and lower latency will translate into “smarter and faster models for your subscription dollar.” According to TechCrunch, MatX will fabricate the chip at TSMC and plans to start shipping devices in 2027, giving the startup a defined timeline to bring its silicon for large language models to market.
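MatX has not published the MatX One's internals beyond that description, but the general idea of a splittable systolic array can be illustrated with a short sketch. The toy Python below is not MatX's design: all names and sizes are hypothetical, and numpy stands in for the hardware processing elements. It shows how one large PE grid can either run a single big matrix multiply or be partitioned into independent sub-arrays so several small multiplies run concurrently instead of leaving most of the grid idle:

```python
import numpy as np

# Conceptual sketch of a "splittable" systolic array (not MatX's
# actual architecture, which has not been published in detail).
ARRAY_DIM = 128  # the PE grid is ARRAY_DIM x ARRAY_DIM

def systolic_matmul(a: np.ndarray, b: np.ndarray, grid: int) -> np.ndarray:
    """Emulate a grid x grid systolic array computing a @ b by
    streaming grid-sized tiles through the PE grid and accumulating."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, grid):          # output tile rows
        for j in range(0, n, grid):      # output tile columns
            for p in range(0, k, grid):  # contraction dimension
                out[i:i+grid, j:j+grid] += (
                    a[i:i+grid, p:p+grid] @ b[p:p+grid, j:j+grid]
                )
    return out

def run(workloads, split: int):
    """Partition the array into (split x split) sub-arrays and give
    each sub-array one independent matmul workload."""
    sub_dim = ARRAY_DIM // split
    return [systolic_matmul(a, b, sub_dim) for a, b in workloads]

# Four small 64x64 problems: on an unsplit 128-wide array each would
# occupy only a fraction of the PEs; a 2x2 split keeps all PEs busy.
rng = np.random.default_rng(0)
jobs = [(rng.standard_normal((64, 64)), rng.standard_normal((64, 64)))
        for _ in range(4)]
results = run(jobs, split=2)
assert np.allclose(results[0], jobs[0][0] @ jobs[0][1])
```

The efficiency argument is about utilization: a monolithic array sized for large matrices wastes most of its processing elements on small or irregular workloads, while a splittable array can match its effective size to the problem at hand.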

Adaptive training method boosts reasoning large language model efficiency

Researchers have developed an adaptive training system that uses idle processors to train a smaller helper model on the fly, doubling training speed for reasoning large language models without sacrificing accuracy. The method aims to cut costs and energy use for advanced applications such as financial forecasting and power grid risk detection.
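The summary above does not spell out the method's internals, so the following is only a conceptual sketch of the general pattern it describes: spending otherwise-idle time in a main training loop on gradient steps for a much smaller helper model. Every function, timing, and model here is a stand-in, not the researchers' implementation:

```python
import time
import random

def main_step(step: int) -> float:
    """Stand-in for one step of the large model's trainer; returns
    the idle window (seconds) it would otherwise spend waiting on
    communication or I/O. Both numbers are arbitrary placeholders."""
    time.sleep(0.01)                  # simulated compute
    return random.uniform(0.0, 0.02)  # simulated idle gap

def helper_step(w: float, lr: float = 0.1) -> float:
    """One gradient step of a toy one-parameter helper model
    minimizing (w - 3)^2, standing in for the real helper model."""
    grad = 2.0 * (w - 3.0)
    return w - lr * grad

w = 0.0
for step in range(50):
    idle = main_step(step)
    # Instead of sleeping through the idle window, fill it with
    # helper-model training steps until the main loop resumes.
    deadline = time.monotonic() + idle
    while time.monotonic() < deadline:
        w = helper_step(w)

print(f"helper weight after piggybacked training: {w:.3f}")  # tends toward 3.0
```

The point of the pattern is that the helper model's training is effectively free: it consumes only processor time that the main job would have wasted waiting.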

How to run MiniMax M2.5 locally with Unsloth GGUF

MiniMax-M2.5 is a new open large language model optimized for coding, tool use, search, and office tasks, and Unsloth provides quantized GGUF builds and usage recipes for running it locally. The guide focuses on memory requirements, recommended decoding parameters, and deployment via llama.cpp and llama-server with an OpenAI-compatible interface.
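As a quick orientation for that workflow: llama-server from the llama.cpp project exposes an OpenAI-compatible API under /v1, so once a GGUF build is downloaded the model can be queried with the standard openai Python client. The sketch below assumes a server has already been started locally; the GGUF filename, port, and sampling settings are placeholders, and the Unsloth guide should be consulted for the model's recommended decoding parameters:

```python
# Assumes llama-server is already running, started with something like:
#   llama-server -m MiniMax-M2.5-Q4_K_M.gguf --port 8080
# (the quantization filename above is illustrative, not prescriptive).
from openai import OpenAI

# llama-server speaks the OpenAI API locally; no real key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

resp = client.chat.completions.create(
    model="local",  # llama-server serves one model; the name is informational
    messages=[
        {"role": "user",
         "content": "Write a Python function that reverses a string."}
    ],
    temperature=1.0,  # placeholder; use the settings the Unsloth guide recommends
)
print(resp.choices[0].message.content)
```

Because the interface is OpenAI-compatible, existing tooling built against hosted chat APIs can usually be pointed at the local server by changing only the base URL.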

Y Combinator backs new wave of computer vision startups in 2026

Y Combinator’s 2026 computer vision cohort spans infrastructure, developer tools, and industry-specific applications from retail security to aquaculture and healthcare. Startups are increasingly pairing computer vision with large vision language models and foundation models to tackle real-time video, automation, and domain-specific analysis.
