NVIDIA partners with Mistral AI to accelerate new family of open models

Mistral AI and NVIDIA have released the Mistral 3 family of open-source multilingual, multimodal models optimized for NVIDIA supercomputing and edge platforms. The lineup includes Mistral Large 3, a mixture-of-experts model with 41B active parameters, 675B total parameters and a 256K context window.

Mistral AI today announced the Mistral 3 family of open-source multilingual, multimodal models, optimized for deployment across NVIDIA supercomputing and edge platforms. The flagship Mistral Large 3 uses a mixture-of-experts architecture that activates only the most relevant parts of the model per token to improve efficiency and scalability. The company says the family will be available everywhere, from the cloud to the data center to the edge, starting Tuesday, Dec. 2.
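
The mixture-of-experts idea is easiest to see in code. Below is a minimal sketch of top-k expert routing in PyTorch: a small router scores the experts for each token and only the top few run. It illustrates the general technique the article describes, not Mistral Large 3's actual architecture, and every size, name and hyperparameter in it is made up for illustration.

```python
# Minimal sketch of mixture-of-experts (MoE) top-k routing.
# Illustrative only: sizes and structure are assumptions, not Mistral's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (tokens, d_model)
        logits = self.router(x)                         # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                           # tokens routed to expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                                # this expert does no work for this batch
            out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

moe = TopKMoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64]); each token only passed through 2 of 8 experts
```

Because each token touches only a couple of experts, total parameter count can grow far beyond the compute spent per token, which is the property the 41B-active / 675B-total split refers to.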

Mistral Large 3 pairs 41B active parameters and 675B total parameters with a large 256K context window to support enterprise AI workloads. NVIDIA hardware and software optimizations are a central part of the announcement. By combining GB200 NVL72 systems with the model's MoE design and tapping NVIDIA NVLink's coherent memory domain along with wide expert parallelism optimizations, the partners report material performance and cost benefits. The model also benefits from low-precision NVFP4 and NVIDIA Dynamo disaggregated inference optimizations. On the GB200 NVL72, Mistral Large 3 achieved a 10x performance gain compared with the prior-generation NVIDIA H200, which the companies say translates into a better user experience, lower per-token cost and higher energy efficiency.

The release also includes nine small language models and a compact Ministral 3 suite targeted at edge devices. Those compact models are optimized for NVIDIA Spark, RTX PCs and laptops, and NVIDIA Jetson devices, and developers can run them via Llama.cpp and Ollama. NVIDIA also integrates the models with open-source NeMo tools for agent lifecycle development, including Data Designer, Customizer, Guardrails and the NeMo Agent Toolkit, and has optimized inference frameworks such as NVIDIA TensorRT-LLM, SGLang and vLLM for the Mistral 3 family. The models are openly available on leading open-source platforms and cloud providers and are expected to be deployable soon as NVIDIA NIM microservices, offering a path from research to production for enterprise AI use cases.
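
For a sense of what that deployment path looks like, here is a minimal sketch of serving one of the models with vLLM, one of the inference frameworks the article says has been optimized for the family. The model identifier is a placeholder, not a confirmed repository name; substitute the actual checkpoint once it is published.

```python
# Sketch: offline generation with vLLM. The model id below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/<mistral-3-model-id>")   # hypothetical repo id, replace with the real one
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize the benefits of mixture-of-experts models."], params)
print(outputs[0].outputs[0].text)
```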

How high quality sound shapes virtual communication and trust

As virtual meetings, classes and content become routine, researchers and audio leaders argue that sound quality is now central to how we judge credibility, intelligence and trust. Advances in AI-powered audio processing are making clear, unobtrusive sound both more critical and more accessible across work, education and marketing.

The rise of artificial intelligence tools and their impact on content creators

The article discusses how a growing range of artificial intelligence tools is reshaping workflows and opportunities for content creators, as highlighted by a post from PANews on X. It references tools including Notion AI, Midjourney, AutoGPT, Agent Scheduler, Cursor and Flowise, and considers both the benefits and challenges for creators.
