CoreWeave Deploys Thousands of NVIDIA Grace Blackwell GPUs for Leading Cloud Clients

CoreWeave launches widespread access to NVIDIA Grace Blackwell GPUs, powering real-world AI innovation for clients like Cohere, IBM, and Mistral AI.

CoreWeave announced it has become one of the first cloud service providers to deploy NVIDIA GB200 NVL72 systems at scale, making thousands of NVIDIA Grace Blackwell GPUs available to its customers. Major industry players such as Cohere, IBM, and Mistral AI are already using these systems to train, deploy, and optimize next-generation AI models and applications, signaling a new era in accelerated cloud computing for machine learning workloads.

The NVIDIA GB200 NVL72 platform is designed for high-performance AI reasoning and large-scale AI agent workloads. CoreWeave, the first cloud provider to make Grace Blackwell GPUs generally available, reports strong results from MLPerf benchmark tests with the new systems. This rapid adoption enables developers and enterprises to handle more complex model architectures and inference demands with greater efficiency and computing power.

Mike Intrator, CEO of CoreWeave, emphasized the strategic collaboration with NVIDIA, highlighting the company’s commitment to delivering the latest innovations in accelerated computing to a broad spectrum of clients. The rollout gives AI pioneers early access to the performance gains of the Grace Blackwell platform, setting the stage for further advances in data-driven applications and AI production pipelines. With this deployment, CoreWeave strengthens its offering for enterprises seeking scalable, high-throughput cloud infrastructure tailored for cutting-edge AI research and product development.

Impact Score: 75

Samsung starts sampling 3 GB GDDR7 running at 36 Gbps

Samsung has begun sampling its fastest GDDR7 memory yet at 36 Gbps, built on 24 Gb dies that work out to 3 GB per chip, and it is also mass producing 28 Gbps 3 GB modules reportedly aimed at a mid-cycle NVIDIA refresh.
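For context, the headline numbers follow from simple arithmetic. The minimal sketch below works out the per-chip capacity and the resulting bandwidth; the 256-bit memory bus is an illustrative assumption for a typical GPU, not something stated in the report.

```python
# Back-of-the-envelope arithmetic for the GDDR7 figures above.
# The 256-bit bus width is an illustrative assumption, not from the article.

DIE_CAPACITY_GBIT = 24          # 24 Gb per GDDR7 die
BITS_PER_BYTE = 8
PIN_SPEED_GBPS = 36             # 36 Gbps per pin
ASSUMED_BUS_WIDTH_BITS = 256    # hypothetical GPU memory bus for illustration

# 24 Gb / 8 bits per byte = 3 GB of capacity per chip
capacity_gb = DIE_CAPACITY_GBIT / BITS_PER_BYTE

# Aggregate bandwidth = per-pin rate * bus width / 8 bits per byte
bandwidth_gbs = PIN_SPEED_GBPS * ASSUMED_BUS_WIDTH_BITS / BITS_PER_BYTE

print(f"Capacity per chip: {capacity_gb:.0f} GB")                # 3 GB
print(f"Bandwidth on a 256-bit bus: {bandwidth_gbs:.0f} GB/s")   # 1152 GB/s
```

On those assumptions, a 256-bit card fully populated with 36 Gbps chips would land around 1.15 TB/s of memory bandwidth; narrower or wider buses scale that figure proportionally.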

FLUX.2 image generation models now released, optimized for NVIDIA RTX GPUs

Black Forest Labs, the frontier AI research lab, released the FLUX.2 family of visual generative models with new multi-reference and pose-control tools and direct ComfyUI support. A collaboration with NVIDIA brings FP8 quantizations that reduce VRAM requirements by 40% and improve performance by 40%.
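As a rough illustration of where FP8 savings come from, halving the bytes per weight halves the weight memory; the sketch below uses a placeholder parameter count (not the actual FLUX.2 model size) to show why the practical figure quoted above is closer to 40% than the ideal 50%.

```python
# Rough estimate of how FP8 weight quantization shrinks a model's VRAM footprint.
# The 32-billion-parameter count is a placeholder, not the actual FLUX.2 size.

ASSUMED_PARAMS = 32e9        # hypothetical parameter count for illustration
BYTES_BF16 = 2               # 16-bit weights: 2 bytes per parameter
BYTES_FP8 = 1                # 8-bit weights: 1 byte per parameter

bf16_gb = ASSUMED_PARAMS * BYTES_BF16 / 1e9
fp8_gb = ASSUMED_PARAMS * BYTES_FP8 / 1e9

print(f"BF16 weights: ~{bf16_gb:.0f} GB")   # ~64 GB
print(f"FP8 weights:  ~{fp8_gb:.0f} GB")    # ~32 GB

# Real-world savings tend to be smaller than the ideal 50% because activations,
# the text encoder, and any layers kept at higher precision still occupy VRAM.
```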
