Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal, and multilingual AI models comprising three Ministral edge models and the sparse Mistral Large 3, trained with 41B active and 675B total parameters, all released under the Apache 2.0 license.

Today Mistral announces Mistral 3, a family of open AI models released under the Apache 2.0 license. The release includes three small, dense Ministral models (14B, 8B, and 3B) and Mistral Large 3, a sparse mixture-of-experts model pretrained with 41B active and 675B total parameters. Mistral positions the Ministral series as offering the best performance-to-cost ratio in its category and describes Large 3 as its most capable model to date, joining the ranks of frontier instruction-fine-tuned open-weight models.
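To put the sparsity figures in perspective, the share of weights actually used per token in a mixture-of-experts model is just active over total parameters. A quick back-of-the-envelope calculation using the counts stated in the announcement:

```python
# Sparsity of Mistral Large 3's MoE design, using the parameter
# counts stated in the announcement (41B active, 675B total).
ACTIVE_PARAMS = 41e9   # parameters used per forward pass (per token)
TOTAL_PARAMS = 675e9   # parameters stored across all experts

active_share = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"~{active_share:.1%} of weights are active per token")  # ~6.1%
```

In other words, each token touches only about a sixteenth of the stored weights, which is what lets a 675B-parameter model run with roughly the per-token compute of a 41B dense model.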

Mistral Large 3 was trained from scratch on 3,000 NVIDIA H200 GPUs and debuts as a mixture-of-experts model that, after post-training, achieves parity with top instruction-tuned open-weight models on general prompts while also showing image understanding and strong multilingual conversation performance. The model ranks #2 in the OSS non-reasoning models category (#6 among OSS models overall) on the LMArena leaderboard. Mistral releases both the base and instruction fine-tuned versions today and says a reasoning variant is coming soon.

To improve accessibility and performance, Mistral worked with vLLM, Red Hat, and NVIDIA to deliver optimized checkpoints and runtime support. The team is releasing a checkpoint in NVFP4 format built with llm-compressor, and Large 3 can run efficiently on Blackwell NVL72 systems and on a single 8×A100 or 8×H100 node using vLLM. NVIDIA engineers added support for TensorRT-LLM and SGLang, integrated Blackwell attention and MoE kernels, and collaborated on prefill/decode disaggregated serving and speculative decoding to enable efficient long-context, high-throughput deployments on GB200 NVL72 and beyond, with edge pathways to DGX Spark, RTX PCs and laptops, and Jetson devices.
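A rough memory estimate shows why the NVFP4 checkpoint fits on a single 8×H100 node. The sketch below assumes NVFP4 packs weights into 4 bits with one 8-bit scale per 16-element block (the format's published layout); it counts only the weights, ignoring KV cache and activation memory that a real deployment also needs:

```python
# Rough weight-memory estimate for Mistral Large 3 in NVFP4.
# Assumes 4-bit weight values plus one amortized 8-bit scale per
# 16-weight block; KV cache and activations are not counted.
TOTAL_PARAMS = 675e9
BITS_PER_WEIGHT = 4 + 8 / 16          # 4-bit value + block-scale overhead

weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9
node_gb = 8 * 80                      # eight 80 GB H100s

print(f"weights: ~{weights_gb:.0f} GB vs {node_gb} GB of HBM")
```

At roughly 380 GB of weights against 640 GB of aggregate HBM, the node has headroom left over for the KV cache, which is what makes long-context serving on a single 8-GPU box plausible.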

The Ministral 3 series targets edge and local use cases with three sizes (3B, 8B, and 14B parameters), and each size is available in base, instruct, and reasoning variants with image understanding. Mistral says Ministral instruct models often match or exceed comparable models while producing an order of magnitude fewer tokens in real-world use, and Ministral reasoning variants can trade throughput for accuracy (for instance 85% on AIME ’25 with the 14B variant). The family supports multimodal and multilingual workflows across 40+ native languages and spans models from 3B to 675B parameters for edge to enterprise needs.

Mistral 3 is available today on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face (Large 3 and Ministral collections), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI, and Together AI, with NVIDIA NIM and AWS SageMaker coming soon. For organizations that need tailored deployments, Mistral offers custom model training services and documentation and invites developers to explore model docs, the AI Governance Hub, and the company’s distribution and support channels.

Impact Score: 70

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, starting Tuesday, Dec. 2.

Micron to exit Crucial consumer business, ending retail SSD and DRAM sales

Micron will wind down its Crucial consumer business and stop retail sales of Crucial-branded SSDs and memory after fiscal Q2 2026 (ending February 2026). The company said the move reallocates capacity to meet surging AI-related demand in the data center and to prioritize enterprise and hyperscale customers.
