Nvidia and Meta have entered into a multiyear, multigenerational strategic partnership spanning on-premises, cloud and artificial intelligence infrastructure. As part of its long-term artificial intelligence infrastructure roadmap, Meta plans to build hyperscale data centers optimized for both training and inference, positioning the company for future large-scale workloads.
Under the agreement, the companies will enable the large-scale deployment of Nvidia CPUs and millions of Nvidia Blackwell and Rubin GPUs aimed at high-performance computing and advanced model training. The partnership also includes the integration of Nvidia Spectrum-X Ethernet switches into Meta’s Facebook Open Switching System (FBOSS) platform, tying compute and networking together for more efficient data center operations.
Nvidia founder and CEO Jensen Huang said that no one deploys artificial intelligence at Meta’s scale, citing the integration of frontier research with industrial-scale infrastructure to support personalization and recommendation systems for billions of users. He highlighted that through deep co-design across CPUs, GPUs, networking and software, Nvidia is bringing its full platform to Meta’s researchers and engineers as they work to build the foundation for the next artificial intelligence frontier.
