NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments; the models become available Tuesday, Dec. 2.

Mistral AI announced the Mistral 3 family yesterday and said the new models are optimized across NVIDIA supercomputing and edge platforms. The company highlighted Mistral Large 3, a mixture-of-experts model that activates only the parts of the network with the most impact for each token rather than firing up every parameter. According to the announcement, that targeted activation delivers efficiency that allows scale without waste and accuracy without compromise, positioning enterprise AI as practical for real-world use.
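The announcement does not detail Mistral's routing scheme, but the "activate only the most impactful experts per token" idea can be illustrated with a generic top-k gating sketch. Everything here (the gating matrix, expert count, and `top_k_routing` helper) is hypothetical and only shows the standard mixture-of-experts pattern, not Mistral Large 3's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_routing(token, expert_weights, gate_weights, k=2):
    """Route one token through only the k highest-scoring experts.

    A small gating network scores every expert; the token then passes
    through just the top-k experts, whose outputs are combined using
    renormalized gate scores. The remaining experts stay idle, so only
    a fraction of the total parameters are active for this token.
    """
    scores = gate_weights @ token                 # one score per expert
    top = np.argsort(scores)[-k:]                 # indices of the k best experts
    gates = np.exp(scores[top] - scores[top].max())
    gates /= gates.sum()                          # softmax over the selected experts
    # Weighted sum of the selected experts' outputs; idle experts cost nothing.
    return sum(g * (expert_weights[i] @ token) for g, i in zip(gates, top))

# Toy layer: 8 experts, but each token activates only 2 of them.
d = 16
experts = rng.standard_normal((8, d, d))
gate = rng.standard_normal((8, d))
out = top_k_routing(rng.standard_normal(d), experts, gate, k=2)
print(out.shape)  # (16,)
```

With 8 experts and k=2, only a quarter of the expert parameters run per token, which is the same efficiency argument the announcement makes at much larger scale.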

Mistral Large 3 is described as having 41B active parameters out of 675B total, along with a 256K-token context window. The models will be available everywhere, from the cloud to the data center to the edge, starting Tuesday, Dec. 2. The release frames the combination of Mistral AI's architecture with NVIDIA hardware as a route to deploying and scaling massive models more efficiently, leveraging the advanced parallelism and hardware optimizations built into NVIDIA GB200 NVL72 systems.

The companies present the collaboration as a step toward what Mistral AI calls distributed intelligence, aiming to bridge research breakthroughs and practical applications. The announcement emphasizes enterprise-focused accuracy and efficiency, and the partnership centers on making the Mistral 3 family broadly deployable on both supercomputing and edge infrastructure. Technical details and deployment guidance are referenced in the partner announcement and the linked NVIDIA developer blog on NVIDIA-accelerated Mistral 3 open models.

Impact Score: 68

Microsoft unveils Maia 200 artificial intelligence inference accelerator

Microsoft has introduced Maia 200, a custom AI inference accelerator built on a 3 nm process and designed to improve the economics of token generation for large models, including GPT-5.2. The chip targets higher performance per dollar for services like Microsoft Foundry and Microsoft 365 Copilot while supporting synthetic data pipelines for next-generation models.

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry’s 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla AI chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.

How high quality sound shapes virtual communication and trust

As virtual meetings, classes, and content become routine, researchers and audio leaders argue that sound quality is now central to how we judge credibility, intelligence, and trust. Advances in AI-powered audio processing are making clear, unobtrusive sound both more critical and more accessible across work, education, and marketing.
