NVIDIA and Microsoft integrate latest technologies to power artificial intelligence superfactories

NVIDIA and Microsoft expanded their collaboration at Microsoft Ignite to equip Microsoft’s Fairwater artificial intelligence superfactory with Spectrum-X switches and Blackwell GPUs. The work also includes new Azure VM previews, Nemotron integrations for SQL Server 2025 and tooling to onboard enterprise artificial intelligence agents into Microsoft 365.

Timed with Microsoft Ignite, NVIDIA and Microsoft announced an expanded collaboration that bolts next-generation networking and accelerated compute into Microsoft’s artificial intelligence superfactory infrastructure. Microsoft will deploy NVIDIA Spectrum-X Ethernet switches to connect the Fairwater data center in Wisconsin with a new facility in Atlanta, and integrate hundreds of thousands of NVIDIA Blackwell GPUs for large-scale training. Microsoft is also deploying more than 100,000 Blackwell Ultra GPUs in NVIDIA GB300 NVL72 systems globally to accelerate inference workloads.

The partnership delivers new cloud and edge offerings. Azure now offers NCv6 Series VMs, powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, in public preview, and extends the Blackwell platform into Azure Local for low-latency, sovereign artificial intelligence use cases. The collaboration also brings Nemotron model integrations that accelerate Microsoft SQL Server 2025, enhancements to Microsoft 365 Copilot, and availability of NVIDIA Nemotron and Cosmos models in Microsoft Foundry. NVIDIA NeMo Agent Toolkit integration with Microsoft Agent 365 will enable enterprise-ready artificial intelligence agents inside Outlook, Teams, Word and SharePoint, and organizations can deploy GPU-optimized NIM microservices where their data resides.
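As a rough illustration of the NIM deployment model mentioned above, a NIM microservice ships as a container that can be pulled from NVIDIA's NGC registry and run wherever the data lives. The sketch below assumes an NGC API key, an NVIDIA GPU with the container toolkit installed, and uses one published model container as a stand-in; the specific image is an example, not something named in the announcement.

```shell
# Sketch only: run a NIM microservice locally (requires an NGC API key
# and an NVIDIA GPU with the NVIDIA Container Toolkit configured).
export NGC_API_KEY="<your-ngc-api-key>"

# Authenticate to the NGC container registry.
docker login nvcr.io -u '$oauthtoken' -p "$NGC_API_KEY"

# Pull and run an example model container; the image name is illustrative.
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.1-8b-instruct:latest
```

Once running, the microservice exposes an OpenAI-compatible HTTP endpoint on port 8000, which is what lets the same container serve inference on-premises or in a sovereign cloud without changing client code.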

Software co-engineering aims to create a fungible fleet that can flexibly accelerate diverse workloads. Continuous full-stack optimizations across NVIDIA Blackwell and Hopper architectures are said to improve throughput and efficiency for generative artificial intelligence, vector search, databases, digital twins and simulation. NVIDIA TensorRT-LLM and the DGX Cloud Benchmarking suite are cited as contributors to reduced latency and cost, with the companies noting a more than 90 percent drop in end-user GPT model pricing on Azure over two years and Microsoft achieving 95 percent of reference architecture performance for H100 training.

The collaboration also targets cybersecurity and industrial digitalization. Joint research on adversarial learning using the NVIDIA Dynamo-Triton framework and TensorRT promises large performance gains versus CPU methods. NVIDIA Omniverse libraries, Isaac Sim and standardized OpenUSD bindings on Azure support digital twins, robotics workflows and physical artificial intelligence, with partners such as Hexagon and Wandelbots building robotics solutions on the jointly optimized stack.

Impact Score: 70

Google unveils Gemini 3 with generative interfaces and an agent

Google introduced Gemini 3, a multimodal artificial intelligence upgrade that creates visual, interactive outputs and an experimental agent to manage multi-step tasks. The model ties deeper into search, shopping, and a new single-prompt development platform.

Gemini 3: Google DeepMind's most intelligent artificial intelligence model

Gemini 3 is Google DeepMind's most intelligent artificial intelligence model, combining advanced reasoning, native multimodality, and long-context understanding to help people learn, build, and plan. It is available via Gemini, Google AI Studio, the Gemini API, and integrations with developer platforms.
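For developers, the Gemini API access mentioned above takes the shape of a REST call to Google's Generative Language endpoint. The sketch below assumes an API key from Google AI Studio, and the model identifier is an assumption for illustration rather than a confirmed release name.

```shell
# Sketch only: call the Gemini API over REST (requires an API key from
# Google AI Studio; the model id below is an assumed placeholder).
export GEMINI_API_KEY="<your-api-key>"

curl -s \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-preview:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "contents": [
          {"parts": [{"text": "Explain long-context reasoning in one sentence."}]}
        ]
      }'
```

The same request body works through Google AI Studio's code export and the official SDKs, which wrap this endpoint.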
