MatX raises $500m to build high-throughput large language model training chip

MatX has secured $500 million in Series B funding to develop a custom processor optimized for large language model training, targeting higher throughput and lower latency than existing chips. The company plans to manufacture the MatX One with TSMC and begin shipping in 2027.

Artificial Intelligence chip startup MatX has raised $500 million in a Series B funding round to accelerate development of its MatX One processor, a chip tailored for large language model workloads. The round was co-led by Situational Awareness, an investment fund founded by former OpenAI researcher Leopold Aschenbrenner, and trading firm Jane Street, with additional backing from Spark Capital, Triatomic Capital, Harpoon, Alchip Technologies, and Marvell. The injection of capital positions MatX to compete in the increasingly crowded market for high performance Artificial Intelligence accelerators.

Founded in 2024 by former Google engineers Reiner Pope and Mike Gunter, MatX is focused on building processors specifically designed to support large language models. At Google, Pope worked on Artificial Intelligence software while Gunter designed the hardware, including chips, that ran those systems; the pair now aims to translate that experience into a vertically informed chip design. In a LinkedIn post detailing the funding, Pope said the upcoming MatX One chip will deliver “much higher throughput than any other chip while also achieving the lowest latency,” targeting both raw performance and responsiveness for model training and inference.

The MatX One design is based on a splittable systolic array architecture, which breaks processing elements into smaller arrays to improve efficiency. Pope said the chip will blend the low latency characteristics of SRAM-first designs with the long-context capabilities of HBM, and that “these elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs.” He argued that higher throughput and lower latency will translate into “smarter and faster models for your subscription dollar.” According to TechCrunch, MatX will fabricate the chip at TSMC and plans to start shipping devices in 2027, giving the startup a defined timeline to bring its large language model-focused silicon to market.
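To illustrate the general idea behind a systolic array (not MatX’s actual, undisclosed design), the sketch below simulates an output-stationary systolic array computing a matrix product cycle by cycle: each cell holds one output element, and operands are skewed so that matching pairs arrive at a cell in successive cycles. The “splittable” refinement the article describes would partition this grid of cells into independent sub-arrays so that smaller workloads do not leave a large grid idle; that aspect is noted only in a comment here.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an output-stationary systolic array
    computing C = A @ B. Cell (i, j) accumulates C[i, j]; operand streams
    are skewed so the pair (A[i, k], B[k, j]) reaches cell (i, j) at cycle
    i + j + k. A "splittable" design would partition this cell grid into
    independent sub-arrays, a detail not modeled here.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    last_cycle = (n - 1) + (m - 1) + (k - 1)  # when the far-corner cell finishes
    for cycle in range(last_cycle + 1):
        for i in range(n):
            for j in range(m):
                kk = cycle - i - j  # operand index arriving at this cell now
                if 0 <= kk < k:
                    C[i, j] += A[i, kk] * B[kk, j]
    return C
```

The point of the dataflow is that every cell does one multiply-accumulate per cycle with only nearest-neighbor data movement, which is what makes systolic arrays attractive for the dense matrix multiplies that dominate large language model workloads.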


