OpenAI Launches GPT-4.1 Family with Expanded Context and Enhanced Coding Abilities

OpenAI unveils the GPT-4.1 model family, delivering massive context windows and major performance gains for coding, instruction following, and long-form content handling.

OpenAI has introduced its latest large language model family, GPT-4.1, which encompasses three variants: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. These models are designed to significantly advance capabilities in code generation, instruction adherence, and context comprehension. Notably, all versions support a massive one-million-token context window, enabling them to seamlessly process expansive documents, complete code repositories, or comprehensive video transcripts—pushing the boundaries of what generative models can handle at scale.
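
As a rough illustration of how a developer might exploit that context window, the sketch below passes a long transcript to the model through OpenAI's Chat Completions API. It is a minimal example, assuming the official OpenAI Python SDK and the published model identifier gpt-4.1; the file name and prompt are illustrative rather than taken from OpenAI's announcement.

```python
# Minimal sketch: feeding a long document to GPT-4.1 via the OpenAI API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the file path and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

# Read a long input, e.g. a full video transcript or a repository dump,
# which can fit within the one-million-token context window.
with open("transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4.1",  # other variants: gpt-4.1-mini, gpt-4.1-nano
    messages=[
        {"role": "system", "content": "Summarize the key decisions in this transcript."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```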

Performance metrics indicate marked improvements over previous models. For example, GPT-4.1 achieved a 54.6% score on the SWE-bench Verified benchmark, surpassing GPT-4o by more than 21 points and establishing itself as a formidable option for real-world software engineering tasks. Its strengths include navigating codebases, producing patches that are immediately testable, and correctly interpreting code diffs without requiring further modification. The model also excelled at instruction following, scoring 38.3% on Scale's MultiChallenge benchmark, a 10.5-point improvement over GPT-4o, demonstrating better adherence to complex, multi-step prompts and to requested output formatting.

The GPT-4.1 models also exhibit outstanding long-context reasoning, as reflected in the Video-MME benchmark, where GPT-4.1 scored 72.0% on the 'long, no subtitles' category, outperforming its predecessor by 6.7 points. This capability is driven by the expanded context window, which allows the models to synthesize information spread across extensive inputs, including unstructured text and lengthy media.

OpenAI attributes these improvements to its continued focus on developer collaboration, streamlined tuning for practical use cases, cost optimization, and reduced latency, particularly with the lighter variants. GPT-4.1 mini lowers operational costs by 83% and cuts response times in half relative to GPT-4o, while GPT-4.1 nano is marketed as ideal for tasks such as classification or autocompletion, where speed and affordability are paramount.

All three models are now accessible through OpenAI's API. While the GPT-4.1 family is not directly available in ChatGPT initially, many core enhancements have already been incorporated in the refreshed GPT-4o version. OpenAI has advised developers using the GPT-4.5 Preview to transition to these new models ahead of the July 14, 2025 retirement date, underlining the company's ongoing shift to more agile and efficient model architectures.
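
To illustrate the kind of lightweight workload OpenAI positions GPT-4.1 nano for, here is a minimal classification sketch. It again assumes the OpenAI Python SDK; the helper name, label set, and ticket text are hypothetical and not part of OpenAI's announcement.

```python
# Sketch of a low-latency classification call using GPT-4.1 nano.
# The classify_ticket helper, the label set, and the sample text are
# illustrative assumptions, not from OpenAI's documentation.
from openai import OpenAI

client = OpenAI()

def classify_ticket(text: str) -> str:
    """Return one of a few support categories for a ticket (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system",
             "content": "Classify the support ticket as one of: billing, bug, feature_request."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_ticket("The app crashes whenever I open the settings page."))
```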
