OpenAI launches open gpt-oss models optimized for NVIDIA RTX GPUs

OpenAI's new gpt-oss models, optimized for NVIDIA GPUs, accelerate local artificial intelligence applications across RTX-powered devices.

OpenAI, in partnership with NVIDIA, has introduced new open-weight gpt-oss language models designed for seamless deployment on NVIDIA's RTX and RTX PRO GPUs. The gpt-oss-20b and gpt-oss-120b models are tailored for versatile reasoning tasks, supporting applications ranging from web search and coding assistance to comprehensive document analysis. Engineered for flexible local and cloud inference, the models support context lengths of up to 131,072 tokens, enabling sophisticated chain-of-thought reasoning and instruction following. NVIDIA's optimizations reportedly deliver up to 256 tokens per second on the high-end GeForce RTX 5090 GPU.

Developers and artificial intelligence enthusiasts can access and run these models on RTX-powered machines via popular frameworks and tools such as Ollama, llama.cpp, and Microsoft AI Foundry Local. Ollama, in particular, offers a streamlined user experience, facilitating out-of-the-box support for OpenAI's open-weight models with a modern interface and features like multimodal inputs and file integration. The models leverage a mixture-of-experts architecture and take advantage of the MXFP4 precision format, which boosts efficiency and reduces resource demands without sacrificing model quality. Training took place on NVIDIA H100 GPUs, underscoring the scalability and performance of the NVIDIA hardware ecosystem from cloud infrastructure to local desktop PCs.
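
For readers who want to try the Ollama path, the minimal sketch below queries a locally pulled gpt-oss model through the official ollama Python client. The gpt-oss:20b tag and the example prompt are assumptions for illustration; substitute whatever tag your Ollama installation lists after pulling the model.

```python
# Minimal sketch: chatting with a locally hosted gpt-oss model via the Ollama
# Python client (pip install ollama). Assumes the Ollama daemon is running and
# the model has already been fetched, e.g. `ollama pull gpt-oss:20b` (tag assumed).
import ollama

response = ollama.chat(
    model="gpt-oss:20b",  # assumed tag; use the tag shown by `ollama list`
    messages=[
        {
            "role": "user",
            "content": "Explain, step by step, why mixture-of-experts models "
                       "can be cheaper to run than dense models of similar size.",
        }
    ],
)

# The generated text is returned under message.content.
print(response["message"]["content"])
```

Ollama handles GPU offload automatically when a supported RTX GPU and driver are present, so no extra configuration is typically needed for the common case.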

NVIDIA has actively engaged the open-source community to further refine model performance on its GPUs, contributing improvements like CUDA Graph implementations and CPU overhead reductions to projects such as llama.cpp and the GGML tensor library. Windows developers also benefit from native access through Microsoft AI Foundry Local, which utilizes ONNX Runtime optimized with CUDA and plans forthcoming support for NVIDIA TensorRT. These advancements mark a significant opportunity for developers looking to embed high-performance artificial intelligence reasoning into Windows applications, and they signal a broader shift toward on-device artificial intelligence acceleration powered by deep collaborations between industry leaders like OpenAI and NVIDIA.
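
As a sketch of the llama.cpp route described above, one common pattern is to launch llama-server against a GGUF build of the model and talk to its OpenAI-compatible endpoint from any client. The GGUF filename, the model string, and the prompt below are placeholders for illustration, not names taken from the announcement.

```python
# Minimal sketch: calling a local llama.cpp server through its OpenAI-compatible API.
# Start the server separately, for example:
#   llama-server -m ./gpt-oss-20b.gguf -ngl 99 --port 8080
# The GGUF filename above is a placeholder; -ngl offloads layers to the RTX GPU.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama-server's default local endpoint
    api_key="unused",                     # the local server does not validate the key
)

completion = client.chat.completions.create(
    model="gpt-oss-20b",  # informational for llama-server, which serves the loaded GGUF
    messages=[{"role": "user", "content": "Write a short summary of CUDA Graphs."}],
)

print(completion.choices[0].message.content)
```

Because the interface mirrors a cloud endpoint, the same client code can move between local and hosted inference with only a base URL change.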

Saudi artificial intelligence startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing artificial intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal and multilingual artificial intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise artificial intelligence deployments starting Tuesday, Dec. 2.
