Intel and SambaNova have entered into a multiyear partnership focused on delivering cost-efficient artificial intelligence (AI) inference as enterprise AI adoption accelerates. The collaboration centers on an Intel-powered AI cloud that scales SambaNova’s existing cloud offering on Intel Xeon-based infrastructure to handle large language and multimodal models. The companies aim to provide scalable, production-ready inference for reasoning, code generation, multimodal applications and agentic workflows, positioning the joint platform as a lower-cost alternative in a market where inference strategies are still taking shape.
SambaNova said it raised $350 million in Series E financing, with participation from Intel Capital, to expand manufacturing and cloud capacity. Intel’s investment will help speed the rollout of the Intel-powered AI cloud and deepen integration between Intel Xeon processors, Intel GPUs, Intel networking and storage, and SambaNova systems as part of what Intel describes as the next generation of heterogeneous AI data centers. Intel’s partnership with SambaNova complements the company’s existing GPU commitments and “does not alter its path forward to competing in AI,” while allowing it to pursue an inference market experts describe as “absolutely up for grabs” compared with model training, which remains dominated by Nvidia.
Analysts say SambaNova brings external expertise and a different approach to AI workloads and scaling compared with Intel and frontrunners Nvidia and AMD. Observers highlighted that the move is a partnership, not an acquisition, characterizing it as a lower-investment path that lets both sides prove out the technology without the distraction of full integration, while keeping the door open to future deals when Intel is stronger. Alongside the Intel deal, SambaNova introduced the SN50 AI chip, which the company said runs agentic AI at one-third the cost of traditional GPUs while performing five times faster than competitor chips. SambaNova plans to use its Series E financing to scale and distribute the SN50, which is set to ship to customers later this year. SoftBank Corp. will be the first to deploy the SN50 within its AI data centers for enterprise and sovereign customers across Asia-Pacific, with the goal of building an AI inference fabric for Japan that delivers speed, resiliency and data sovereignty.
