NVIDIA partners with Mistral AI on open-source Mistral 3 models

NVIDIA and Mistral AI launched the open-source Mistral 3 family, a suite of multilingual, multimodal models engineered for NVIDIA hardware. The release pairs mixture-of-experts efficiency with optimizations for GB200 NVL72 systems and edge platforms.

NVIDIA and Mistral AI unveiled the Mistral 3 family, a set of open-source multilingual, multimodal models built and optimized for NVIDIA’s supercomputing and edge platforms. The companies position the release as an enterprise-focused offering that brings mixture-of-experts architecture to production workloads and edge devices. According to the announcement, the partnership aims to bridge research breakthroughs with real-world applications and to advance what they call the era of distributed intelligence.

The flagship Mistral Large 3 uses a mixture-of-experts approach that activates only the most relevant parts of the model for each query rather than running every parameter on every task. The announcement cites 41B active parameters out of 675B total, meaning roughly 6% of the network runs per token, along with a 256K context window for long-form inputs. Benchmarks in the article show significant performance gains on GB200 NVL72 systems over the H200 generation, attributed to NVLink coherent memory and wide expert parallelism optimizations that raise throughput, lower per-token costs, and improve energy efficiency.
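To make the active-parameter idea concrete, here is a minimal, illustrative top-k routing sketch in Python. The layer sizes, expert count, and routing details are toy assumptions for demonstration only, not Mistral Large 3's actual configuration.

```python
import numpy as np

# Illustrative mixture-of-experts routing: the router scores every expert
# for a token, but only the top-k experts actually run, so most parameters
# stay idle for any given query. All sizes below are toy values.
rng = np.random.default_rng(0)

hidden_dim, num_experts, top_k = 16, 8, 2
token = rng.standard_normal(hidden_dim)

router_weights = rng.standard_normal((num_experts, hidden_dim))
expert_weights = rng.standard_normal((num_experts, hidden_dim, hidden_dim))

# Route: score all experts, keep only the top-k, normalize their gates.
scores = router_weights @ token
top = np.argsort(scores)[-top_k:]
gates = np.exp(scores[top]) / np.exp(scores[top]).sum()

# Only the selected experts' parameters are used for this token.
output = sum(g * (expert_weights[e] @ token) for g, e in zip(gates, top))

print(f"{top_k}/{num_experts} experts active "
      f"({top_k / num_experts:.0%} of expert parameters used per token)")
```

In the same spirit, 41B active out of 675B total parameters means only a small fraction of the model's weights participate in any single forward pass, which is where the throughput and cost advantages come from.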

The release also includes nine compact “Ministral 3” models targeted at edge deployment and optimized for NVIDIA Spark, RTX PCs, laptops, and Jetson devices. Developers can access the compact variants through frameworks such as Llama.cpp and Ollama. The open-source orientation is intended to contrast with proprietary models from other vendors and to let enterprises customize the models using tools named in the article, including Data Designer, Customizer, Guardrails, and the NeMo Agent Toolkit. NVIDIA has optimized multiple inference frameworks for the Mistral 3 family, including TensorRT-LLM, SGLang, and vLLM, and says the models are available on leading cloud platforms and open-source repositories, with NVIDIA NIM microservices deployment coming soon.
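As a rough illustration of how one of the named frameworks could serve such a model, the sketch below uses vLLM's Python API. The model identifier is a placeholder assumption, since the article does not give repository names; substitute the actual Mistral 3 or Ministral 3 checkpoint once published.

```python
from vllm import LLM, SamplingParams

# Hypothetical checkpoint name used purely for illustration; replace with
# the real Mistral 3 / Ministral 3 identifier from the model hub.
llm = LLM(model="mistralai/Ministral-3-8B")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Summarize the Mistral 3 announcement in one sentence."], params
)
print(outputs[0].outputs[0].text)
```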

The article frames the partnership as a democratizing move for enterprise artificial intelligence, offering high-performance open-source alternatives and enterprise-grade tooling. It highlights immediate developer access to state-of-the-art multimodal capabilities while noting that the true test will be adoption by enterprises that need production-ready, scalable, and customizable models without closed-source constraints.

Impact Score: 72

Artificial intelligence becomes a lever for transformation in Africa

African researchers and institutions are positioning artificial intelligence as a tool to tackle structural challenges in health, education, agriculture and governance, while pushing for data sovereignty and local language inclusion. The continent faces hurdles around skills, infrastructure and control of data but is exploring frugal technological models tailored to its realities.

Microsoft unveils Maia 200 artificial intelligence inference accelerator

Microsoft has introduced Maia 200, a custom artificial intelligence inference accelerator built on a 3 nm process and designed to improve the economics of token generation for large models, including GPT-5.2. The chip targets higher performance per dollar for services like Microsoft Foundry and Microsoft 365 Copilot while supporting synthetic data pipelines for next generation models.

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry’s 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla’s artificial intelligence chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.
