Zoom expands Artificial Intelligence Companion with NVIDIA Nemotron

Zoom is integrating NVIDIA Nemotron into its Artificial Intelligence Companion 3.0, using a federated, hybrid language model approach to route tasks between small, low-latency models and a fine-tuned 49-billion-parameter large language model to improve speed, cost, and quality for enterprises.

Zoom and NVIDIA are partnering to expand Zoom’s Artificial Intelligence Companion by integrating NVIDIA Nemotron open technologies into a federated architecture. The collaboration introduces a next-generation hybrid language model approach that intelligently routes queries between Zoom’s proprietary small language models, optimized for low latency and specific skills, and a fine-tuned large language model for deeper reasoning. Zoom says the framework balances speed, cost, and accuracy and will power AI Companion 3.0 across industries such as finance, healthcare, and government.
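The routing idea described above can be sketched in a few lines. This is a hypothetical illustration only, not Zoom's implementation: the skill keywords, model-tier names, and the keyword heuristic are all assumptions standing in for whatever classifier the federated architecture actually uses.

```python
# Hypothetical sketch of hybrid routing: a lightweight check sends routine
# skills to small low-latency models and escalates open-ended queries to a
# large reasoning model. All names and heuristics here are illustrative.

SMALL_MODEL_SKILLS = ("summarize", "translate", "transcribe")

def route(query: str) -> str:
    """Pick a model tier for a query using a simple keyword heuristic."""
    q = query.lower()
    if any(skill in q for skill in SMALL_MODEL_SKILLS):
        return "small-language-model"   # low latency, low cost
    return "large-reasoning-llm"        # deeper reasoning, higher cost

print(route("Please summarize this meeting"))   # small-language-model
print(route("Draft a market-entry strategy"))   # large-reasoning-llm
```

In a production router the keyword check would be replaced by a learned classifier, but the cost trade-off is the same: cheap models for narrow skills, the large model only when reasoning depth justifies the latency.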

The technical stack includes a new 49-billion-parameter large language model built on NVIDIA Nemotron and developed with NVIDIA NeMo tools, alongside other Nemotron-based reasoning models such as Llama Nemotron Super. Zoom’s federated architecture is patent pending and already used for real-time transcription, translation, and summarization. The partnership leverages NVIDIA GPUs and software to accelerate development, improve cost-efficient model routing decisions, and enhance retrieval-augmented generation capabilities. The company highlights integrations with Microsoft 365, Microsoft Teams, Google Workspace, Slack, Salesforce, and ServiceNow to streamline enterprise workflows.

Zoom frames the work as a responsible Artificial Intelligence foundation for enterprise deployments, emphasizing security and privacy controls. The company states it does not use customer audio, video, chat, screen sharing, attachments, or other communications to train its own or third-party models. Zoom and NVIDIA position the collaboration as a way to deliver customizable, private, and scalable AI experiences that improve collaboration and automation while optimizing cost, quality, and latency for enterprise and government customers.

Tesla plans terafab for Artificial Intelligence chips

Tesla is moving toward a large-scale chip manufacturing project to support its autonomous driving roadmap. Elon Musk said the terafab effort for Artificial Intelligence chips will launch in seven days and may involve Intel, TSMC, and Samsung.

Timeline traces evolution, civilisation and planetary stewardship

A sweeping chronology links cosmology, evolution, human history and modern environmental risk in a single long view of the human condition. The sequence culminates in contemporary debates over climate change, biodiversity loss and artificial intelligence governance.

Wolters Kluwer report tracks Artificial Intelligence shift in legal work

Wolters Kluwer’s 2026 Future Ready Lawyer findings show Artificial Intelligence has become a foundational tool across law firms and corporate legal departments. The survey points to measurable time savings, revenue growth, and rising pressure to strengthen training, ethics, and security.

Anthropic March 2026 release roundup

Anthropic rolled out a broad set of March 2026 updates across Claude Code, the Claude Developer Platform, Claude apps, and enterprise partnerships. Changes focused on larger context windows, workflow improvements, reliability fixes, visual output features, and new partner enablement programs.

China renews push to lead in technology and Artificial Intelligence

China’s 15th five-year plan elevates science and technology as core national priorities, with a strong emphasis on self-reliance and Artificial Intelligence. The blueprint signals heavier investment, broader industrial support, and a more confident bid to shape global technology standards.
