Nvidia and Google Cloud expand enterprise artificial intelligence with G4 VMs and Omniverse

Google Cloud made G4 virtual machines generally available, powered by Nvidia RTX Pro 6000 Blackwell Server Edition GPUs, and listed Omniverse and Isaac Sim images on its marketplace. The releases target visual computing, agentic and physical artificial intelligence, and industrial digitalization.

Nvidia and Google Cloud are broadening access to accelerated computing across enterprise workloads, announcing the general availability of G4 virtual machines powered by RTX Pro 6000 Blackwell Server Edition GPUs. Nvidia Omniverse and Nvidia Isaac Sim are also now offered as virtual machine images on the Google Cloud Marketplace, aimed at enabling physical and agentic artificial intelligence as well as advanced visual computing in industries such as manufacturing, automotive and logistics. Early adopters include WPP, which is generating photorealistic 3D advertising environments at scale, and Altair, which is accelerating simulation and fluid dynamics workloads.

The G4 VM centers on the Blackwell-based RTX Pro 6000, combining fifth-generation Tensor Cores, which add FP4 precision for higher throughput at lower memory cost, with fourth-generation RT Cores that deliver more than 2x the real-time ray-tracing performance of the prior generation. On Google Cloud, configurations scale to eight GPUs per instance with 768 GB of GDDR7 memory in total, plus high-throughput local and network storage. As part of Google Cloud’s Artificial Intelligence Hypercomputer architecture, G4 VMs integrate with Google Kubernetes Engine and Vertex AI for streamlined machine learning operations and can accelerate large-scale analytics on Apache Spark and Hadoop via Dataproc. They also support popular third-party design and graphics tools, including Autodesk AutoCAD, Blender and Dassault Systèmes SolidWorks.
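The memory effect of FP4 can be sketched with simple arithmetic, assuming weight-only quantization and ignoring activations and KV cache. The 768 GB per eight-GPU instance comes from the article; the 70-billion-parameter model is a hypothetical example.

```python
# Rough sketch of why FP4 matters for inference memory. Assumes
# weight-only quantization; activations and KV cache are ignored.
# The 768 GB / 8-GPU figure comes from the article; the 70B-parameter
# model is a hypothetical example.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in decimal gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

params = 70e9                            # hypothetical 70B-parameter model
fp16_gb = weight_memory_gb(params, 16)   # 140.0 GB: exceeds one GPU
fp4_gb = weight_memory_gb(params, 4)     # 35.0 GB: fits comfortably

per_gpu_gb = 768 / 8                     # 96 GB GDDR7 per GPU on an 8-GPU G4 VM
print(f"FP16: {fp16_gb:.0f} GB, FP4: {fp4_gb:.0f} GB, per GPU: {per_gpu_gb:.0f} GB")
```

Under these assumptions, FP4 cuts the weight footprint 4x versus FP16, which is what lets larger models fit on a single GPU's 96 GB of GDDR7.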

With Omniverse available as a virtual machine image, customers can build industrial digitalization applications on Universal Scene Description (OpenUSD). Enterprises can create and operate digital twins of factories and products using the Nvidia Cosmos world foundation model platform and Omniverse Blueprints, enabling physically accurate, real-time simulations for optimization. Isaac Sim, built on Omniverse, supports training, simulation and validation of artificial intelligence-driven robots in physics-based virtual environments prior to deployment.
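At its simplest, an OpenUSD stage is a plain-text `.usda` file. The sketch below hand-writes a minimal stage of the kind Omniverse applications compose into larger scenes; the prim names are hypothetical, and a real pipeline would author this through the `pxr` (OpenUSD) Python API rather than raw text.

```python
# Minimal hand-authored OpenUSD (.usda) stage: a factory Xform containing
# one Cube as a stand-in for a machine. Prim names ("Factory", "Machine")
# are illustrative; real digital twins are authored via the pxr API.
from pathlib import Path

MINIMAL_STAGE = '''#usda 1.0
(
    defaultPrim = "Factory"
    upAxis = "Z"
)

def Xform "Factory"
{
    def Cube "Machine"
    {
        double size = 2
    }
}
'''

def write_stage(path: str) -> Path:
    """Write the minimal stage to disk and return its path."""
    p = Path(path)
    p.write_text(MINIMAL_STAGE)
    return p

write_stage("factory.usda")
```

Because OpenUSD layers are composable, a stage like this can later be referenced and overridden by other layers, which is the mechanism digital-twin pipelines rely on.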

Beyond Omniverse, Google Cloud customers can deploy Nvidia’s broader software stack. For agentic artificial intelligence, the Nemotron family of open reasoning models and Nvidia Blueprints help teams build sophisticated agents, while Nvidia NIM microservices provide optimized, secure inference. Scientific and high-performance computing workloads can leverage CUDA-X libraries and microservices, with core genomics sequence alignment algorithms on the RTX Pro 6000 Blackwell GPU seeing up to 6.8x higher throughput than the previous generation. Design workflows are supported through RTX Virtual Workstation software, delivering high-performance virtual workstations from G4 VMs to any device.
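NIM microservices expose an OpenAI-compatible HTTP API, so calling one reduces to a standard chat-completions request. In the sketch below the endpoint URL and model identifier are illustrative assumptions, not values from the article, and a running NIM container is required for the network call to succeed.

```python
# Sketch of querying an Nvidia NIM microservice via its OpenAI-compatible
# chat completions endpoint. The URL and model id below are assumptions
# for illustration; substitute the values of an actual deployment.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"   # assumed local deployment
MODEL = "nvidia/llama-3.1-nemotron-70b-instruct"         # hypothetical model id

def build_payload(prompt: str, max_tokens: int = 128) -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_nim(prompt: str) -> str:
    """Send the prompt to the NIM endpoint (requires a running service)."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the interface mirrors the OpenAI API, existing client code can usually be pointed at a NIM endpoint by changing only the base URL and model name.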

The announcements position a unified, end-to-end platform built on the Nvidia Blackwell roadmap that spans from GB200 NVL72 (A4X VMs) and HGX B200 (A4 VMs) for massive-scale artificial intelligence training and inference to RTX Pro 6000 Blackwell on G4 VMs for inference and visual computing. The consistent architecture aims to accelerate multistage pipelines, from data analytics to physical artificial intelligence, within a single cloud ecosystem.

Impact Score: 58

Federal safety net unprepared for Artificial Intelligence job losses

Economists are warning that the federal system designed to support displaced workers is not equipped for a wave of job losses tied to Artificial Intelligence. Existing unemployment benefits and retraining programs are widely seen as too limited to manage broad disruption.

Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft and Oracle are among the organizations using the technology in large Artificial Intelligence environments.
