Intel and SambaNova Systems have entered a “multiyear strategic collaboration” to deliver “cloud-scale AI inference,” shortly after acquisition talks between the two companies stalled. SambaNova announced the partnership alongside a $350 million Series E funding round led by Vista Equity Partners and Cambium Capital with “strong participation” from Intel Capital, and at the same time unveiled its next-generation SN50 chip, which it claims can outperform rival products. A SambaNova spokesperson said an acquisition deal is “not in discussion at this stage,” while an Intel spokesperson declined to comment on the previously reported talks.
The San Jose-based startup is positioning the SN50 as the “most efficient chip for agentic AI,” stating that the chip, which is set to ship later this year, is up to five times faster than competing chips and can run agentic AI workloads at one-third the cost of GPUs. Intel’s data center leadership framed the alliance as complementary to its broader artificial intelligence infrastructure strategy across Xeon CPUs and GPUs, noting that “Customers are asking for more choice and more efficient ways to scale AI” and that combining Intel compute, networking and memory with SambaNova’s full-stack systems provides a GPU alternative for deploying advanced artificial intelligence at scale. SambaNova said the funding and Intel partnership, which it describes as a multibillion-dollar market opportunity, will support the SN50’s production ramp and distribution.
The multiyear collaboration is focused on delivering “high-performance, cost-efficient AI inference solutions for AI-native companies, model providers, enterprises and government organizations around the world,” centered on expanding SambaNova’s vertically integrated artificial intelligence cloud platform using Intel Xeon CPUs. The expansion will be “supported by reference architectures, deployment blueprints and partnerships with systems integrators and software vendors,” with the companies planning co-selling and co-marketing activities; Intel is expected to leverage its “global enterprise, cloud and partner channels to accelerate adoption across the AI ecosystem.” SambaNova said the SN50 uses its Reconfigurable Dataflow Unit architecture to provide “ultra-low latency” and power “thousands of simultaneous AI sessions with consistent high performance.” Its three-tier memory design, combining SRAM, HBM and DDR, offers “breakthrough model capacity,” enabling models with more than 10 trillion parameters and context lengths of over 10 million tokens, and is optimized through “resident multi-model memory and agentic caching” to cut infrastructure costs for enterprise-scale artificial intelligence deployments. The first SN50 customer is SoftBank Group, which plans to integrate the chip into next-generation artificial intelligence data centers in Japan. SambaNova said its performance and cost claims are based on internal benchmarking against widely deployed, current-generation GPU systems running large language models.
