CoreWeave announced that it has become one of the first cloud service providers to deploy NVIDIA GB200 NVL72 systems at scale, making thousands of NVIDIA Grace Blackwell GPUs available to its customers. Major industry players such as Cohere, IBM, and Mistral AI are already using these systems to train, deploy, and optimize next-generation AI models and applications, signaling a new era in accelerated cloud computing for machine learning workloads.
The NVIDIA GB200 NVL72 platform is designed for high-performance AI reasoning and large-scale AI agent workloads. CoreWeave, the first cloud provider to make Grace Blackwell GPUs generally available, reports strong MLPerf benchmark results with the new system. This rapid adoption lets developers and enterprises handle more complex model architectures and inference demands with greater efficiency and computing power.
Mike Intrator, CEO of CoreWeave, emphasized the strategic collaboration with NVIDIA, highlighting the company's commitment to delivering the latest innovations in accelerated computing to a broad range of clients. The rollout gives AI pioneers early access to the performance gains of the Grace Blackwell platform, setting the stage for further advances in data-driven applications and AI production pipelines. With this deployment, CoreWeave strengthens its offering for enterprises seeking scalable, high-throughput cloud infrastructure tailored to cutting-edge AI research and product development.