Zoom expands Artificial Intelligence Companion with NVIDIA Nemotron

Zoom is integrating NVIDIA Nemotron into its Artificial Intelligence Companion 3.0, using a federated, hybrid language model approach that routes tasks between small, low-latency models and a fine-tuned 49-billion-parameter large language model to improve speed, cost, and quality for enterprise customers.

Zoom and NVIDIA are partnering to expand Zoom’s Artificial Intelligence Companion by integrating NVIDIA Nemotron open technologies into a federated architecture. The collaboration introduces a next-generation hybrid language model approach that intelligently routes queries between Zoom’s proprietary small language models, optimized for low latency and specific skills, and a fine-tuned large language model for deeper reasoning. Zoom says the framework balances speed, cost, and accuracy and will power AI Companion 3.0 across industries such as finance, healthcare, and government.
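To make the hybrid routing idea concrete, here is a minimal sketch of how queries might be directed between a small, low-latency model and a larger reasoning model. The function names, the routing rule, and the word-count threshold are illustrative assumptions, not Zoom's patent-pending federated architecture.

```python
from dataclasses import dataclass

# Hypothetical hybrid-routing sketch: all names and thresholds are assumptions
# for illustration, not Zoom's actual implementation.

@dataclass
class Query:
    text: str
    needs_reasoning: bool  # e.g. multi-step analysis vs. a simple lookup


def call_small_model(q: Query) -> str:
    # Placeholder for a low-latency, task-specific small language model.
    return f"[small-model answer] {q.text}"


def call_large_model(q: Query) -> str:
    # Placeholder for a fine-tuned large reasoning model.
    return f"[large-model answer] {q.text}"


def route(q: Query) -> str:
    # Send narrow, latency-sensitive tasks to the small model and
    # queries that need deeper reasoning (or long inputs) to the large one.
    if q.needs_reasoning or len(q.text.split()) > 200:
        return call_large_model(q)
    return call_small_model(q)


if __name__ == "__main__":
    print(route(Query("Summarize this meeting in two sentences.", needs_reasoning=False)))
    print(route(Query("Compare the vendor proposals and draft a risk analysis.", needs_reasoning=True)))
```

The trade-off the routing rule encodes is the one Zoom describes: cheap, fast models handle routine skills, while the larger model is reserved for work where quality depends on deeper reasoning.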

The technical stack includes a new 49-billion-parameter large language model built on NVIDIA Nemotron and developed with NVIDIA NeMo tools, alongside other Nemotron-based reasoning models such as Llama Nemotron Super. Zoom's federated architecture is patent pending and is already used for real-time transcription, translation, and summarization. The partnership leverages NVIDIA GPUs and software to accelerate development, improve decision making in its lower-cost models, and enhance retrieval-augmented generation capabilities. Zoom also highlights integrations with Microsoft 365, Microsoft Teams, Google Workspace, Slack, Salesforce, and ServiceNow to streamline enterprise workflows.
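As a rough illustration of the retrieval-augmented generation pattern mentioned above, the toy sketch below picks the best-matching document by token overlap and prepends it to a prompt so the model answers from grounded context. The corpus, scoring function, and prompt format are assumptions for demonstration, not Zoom's or NVIDIA's production pipeline.

```python
# Toy retrieval-augmented generation sketch. The documents, scoring, and prompt
# format below are illustrative assumptions only.

DOCUMENTS = [
    "Q3 all-hands recap: revenue grew 12 percent and the hiring plan was approved.",
    "Security policy update: customer communications are not used for model training.",
    "Project Atlas kickoff notes: the pilot with the finance team starts next month.",
]


def score(query: str, doc: str) -> int:
    # Simple token-overlap score; a production system would use vector embeddings.
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens)


def retrieve(query: str, k: int = 1) -> list[str]:
    # Return the k highest-scoring documents to use as grounding context.
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str) -> str:
    # Prepend retrieved context so the model answers from source material.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."


if __name__ == "__main__":
    print(build_prompt("When does the Project Atlas pilot start?"))
```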

Zoom frames the work as a responsible Artificial Intelligence foundation for enterprise deployments, emphasizing security and privacy controls. The company states that it does not use customer audio, video, chat, screen sharing, attachments, or other communications to train its own or third-party models. Zoom and NVIDIA position the collaboration as a way to deliver customizable, private, and scalable AI experiences that improve collaboration and automation while optimizing cost efficiency, quality, and latency for enterprise and government customers.

Impact Score: 55

Why Nvidia’s value is so high: market cap and future growth

Nvidia’s market capitalization reflects its leadership in GPUs for Artificial Intelligence and data centers, reinforced by a growing software ecosystem and strong investor expectations. The article outlines the technical and market drivers behind that valuation and notes risks such as competition and market volatility.

Artificial intelligence transforming the insurance industry

Emmanuèle Lutfalla and Louis Fer examine how artificial intelligence is reshaping insurers’ operations, delivering efficiency and new products while creating legal, ethical and regulatory risks under the EU artificial intelligence framework.

ASUS launches GB721-E2 rack with NVIDIA GB300 NVL72 for Artificial Intelligence

ASUS introduced the XA GB721-E2, a rack-scale system built on the NVIDIA GB300 NVL72 designed for large-scale model training and high-throughput inference. The system pairs high-density NVIDIA Grace CPUs and Blackwell Ultra GPUs with liquid cooling and networking for enterprise Artificial Intelligence and HPC deployments.
