NVIDIA and Google Expand AI Collaboration With Blackwell and Gemini Integration

NVIDIA and Google deepen their partnership, bringing next-generation AI models and infrastructure to developers and enterprises worldwide.

NVIDIA and Google have reaffirmed their commitment to advancing AI through a deepened engineering partnership focused on optimizing the entire computing stack for streamlined development and deployment. This ongoing collaboration is reflected in significant joint efforts to enhance popular open-source software such as JAX, OpenXLA, MaxText, and llm-d. These optimizations directly support Google's Gemini models and the open Gemma family, ensuring that innovative AI frameworks reach both cloud-based and on-premises deployments.
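To ground the stack being described, the sketch below shows where these optimizations land for a JAX user: `jax.jit` traces a Python function and compiles it through XLA, the compiler layer the two companies jointly tune. The function itself is a toy example for illustration, not code from either company.

```python
import jax
import jax.numpy as jnp

# jax.jit traces this toy function and compiles it via XLA --
# the layer where joint NVIDIA/Google optimizations take effect.
@jax.jit
def scaled_shift(x):
    return jnp.tanh(x) * 2.0 + 1.0

out = scaled_shift(jnp.zeros(4))  # compiled on first call, cached after
print(out.shape)
```

The same compiled program runs unchanged on CPU, GPU, or TPU backends, which is why improvements at the XLA level reach applications without code changes.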

The joint advancements extend to infrastructure, with Google Cloud being the first provider to offer NVIDIA's new Blackwell-powered platforms, the HGX B200 and GB200 NVL72, via A4 and A4X VMs. These solutions, integrated into Google Cloud's managed services such as Vertex AI and Google Kubernetes Engine, provide organizations with scalable, high-performance environments for training and serving complex agentic AI workloads. The A4X VMs also scale seamlessly over Google's advanced Jupiter network and NVIDIA's ConnectX-7 network interface cards, further bolstered by liquid cooling technologies that deliver efficient, sustainable large-scale computation.

Importantly, the collaboration addresses enterprise needs for data residency, security, and compliance. The latest partnership milestone allows customers to deploy Gemini models on-premises through Google Distributed Cloud with NVIDIA Blackwell technology, making advanced AI accessible to sectors with stringent privacy requirements, such as healthcare and finance. This confidential computing capability enables organizations to retain control over their data while continuing to innovate within regulatory boundaries.

Performance optimizations have been vital, with enhancements for the Gemini and Gemma models built on technologies like NVIDIA TensorRT-LLM and NVIDIA NIM microservices. These improvements maximize inference efficiency across a variety of deployment architectures, from cloud data centers to local NVIDIA RTX-powered PCs, making advanced AI more broadly accessible. In parallel with these technical strides, NVIDIA and Google Cloud are cultivating a robust developer ecosystem by optimizing open frameworks and launching a dedicated joint developer community. This ecosystem empowers developers to build and scale innovative AI solutions, further bolstered by open-source leadership and cross-organizational knowledge sharing.
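NIM microservices expose an OpenAI-compatible HTTP API, so a deployed Gemma model can be queried like any chat-completion endpoint. The sketch below assembles such a request; note that the endpoint URL and model identifier are assumptions for illustration and should be replaced with the values of an actual deployment.

```python
import json
import urllib.request

# Assumed values for a local NIM deployment -- adjust to your environment.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "google/gemma-model-id"  # hypothetical model identifier

def build_chat_request(prompt, model=MODEL, max_tokens=128):
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(payload, url=NIM_URL):
    """POST the payload to a running NIM microservice (requires a live server)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Summarize the Blackwell platform in one sentence.")
print(payload["model"])
```

Because the API surface matches the widely used OpenAI schema, existing client tooling can target a NIM container with little more than a URL change.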
