SambaNova Systems Inc. introduced its most advanced artificial intelligence (AI) processor, the SN50, alongside closing a $350 million late-stage Series E funding round led by Vista Equity Partners, Cambium Capital and others. The round saw strong participation from Intel Capital and additional investors including Assam Ventures, Battery Ventures, Gulf Development Public Company Limited, Mayfield Capital, QIA, Saudi First Data, Seligman Ventures, T. Rowe Price, &E, 8Square, Atlantic Bridge, BlackRock, GV, Nepenthe, Nuri Capital and Redline Capital. The company positions itself as a rival to Nvidia, offering high-performance chips for AI model training and inference that can be accessed via the cloud or deployed on-premises, with a particular emphasis on power efficiency and tokens generated per kilowatt-hour.
The SN50, described as a reconfigurable dataflow unit, is a specialized AI accelerator that differs from Nvidia’s graphics processing units and is closer in design to Google’s tensor processing units and Amazon’s Trainium chips. SambaNova said the SN50 delivers five times the compute and four times the networking bandwidth of its previous-generation SN40 chipset, and that customers will be able to link up to 256 accelerators over a multi-terabit-per-second interconnect to support larger, longer-context AI models with higher throughput and responsiveness without escalating compute costs. The chip uses a three-tier memory architecture that can support AI models with up to 10 trillion parameters and 10-million-token context lengths, with resident multi-model memory and agentic caching designed to lower cost per token and improve power efficiency. Target applications include real-time AI voice assistants that require ultra-low latency and can scale to thousands of simultaneous sessions.
To extend its reach, SambaNova is deepening its collaboration with Intel Corp., which is investing to accelerate a new Intel-powered AI cloud based on the SambaNova Cloud platform that will use Intel Xeon central processing units to create infrastructure optimized for multimodal large language models. Intel’s Xeon CPUs will handle general-purpose and system-management tasks, while the SN50 will focus on rapid data processing and complex calculations, with the combined cloud intended to improve latency, throughput and overall AI workload performance. Intel plans to support SambaNova’s cloud expansion with reference architectures, deployment blueprints and its software and integrator ecosystem, and the two companies will co-market and co-sell the platform as a GPU alternative. The partnership gives SambaNova access to Intel’s global reach and manufacturing capacity, and offers Intel a renewed path into an AI chip market where it has lagged Nvidia and Advanced Micro Devices Inc. Analysts note that enterprises will ultimately be swayed by performance rather than vendor loyalty, even as reports have suggested Intel previously considered acquiring SambaNova for around $1.6 billion.
