Alibaba Unveils Qwen3: New Standard in Open-Source Large Language Models

Alibaba launches Qwen3, a groundbreaking open-source large language model family that advances Artificial Intelligence innovation with hybrid reasoning and multilingual support.

Alibaba has introduced Qwen3, its latest generation of open-source large language models, establishing a new benchmark in Artificial Intelligence innovation. Qwen3 comprises six dense models and two Mixture-of-Experts (MoE) models, with parameter scales ranging from 0.6 billion to 235 billion, now freely accessible worldwide. Developers can leverage these models for diverse applications spanning mobile devices, smart glasses, autonomous vehicles, and robotics. All Qwen3 models are available on platforms such as Hugging Face, GitHub, and ModelScope, ensuring broad developer access and fostering global collaboration.

Qwen3 marks Alibaba's debut in hybrid reasoning models, uniting traditional large language model capabilities with advanced dynamic reasoning. The models are engineered to switch flexibly between 'thinking' mode for complex, multi-step tasks (such as mathematics, coding, and logical deduction) and 'non-thinking' mode for fast, general-purpose outputs. For API users, Qwen3 provides granular control over the duration of its reasoning (up to 38,000 tokens), optimizing performance while containing computational costs. The flagship model, Qwen3-235B-A22B MoE, notably reduces operational expenses compared to other state-of-the-art models, reaffirming Alibaba's commitment to affordable, high-performance Artificial Intelligence.

The Qwen3 suite is trained on an expansive dataset of 36 trillion tokens, twice that of its predecessor, resulting in significant advancements in reasoning, instruction following, tool use, and multilingual tasks. Key features include superior support for 119 languages and dialects, robust agent-task integration through native Model Context Protocol and function-calling, leading benchmark scores in mathematics and coding, and enhanced human alignment for natural dialogue and creative applications. The models achieved top-tier results across industry benchmarks including AIME25, LiveCodeBench, BFCL, and Arena-Hard, driven by a complex four-stage training process focused on reinforcement learning and reasoning fusion.
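Qwen3's function-calling support is typically exposed through the OpenAI-style `tools` schema when the model is served behind a compatible endpoint. The sketch below, which only constructs a request payload without sending it, illustrates what such a call might look like; the `get_weather` tool definition is a hypothetical example, not part of the announcement, and the exact schema accepted depends on the serving platform.

```python
# Sketch: building an OpenAI-style function-calling request for a Qwen3 model.
# The tool definition below is illustrative; consult the serving platform's
# documentation for the exact schema it accepts.

def build_tool_call_request(user_message: str) -> dict:
    """Assemble a chat-completion payload that exposes one callable tool."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
    return {
        "model": "Qwen3-235B-A22B",  # flagship model named in the article
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_call_request("What's the weather in Hangzhou?")
```

If the model elects to use the tool, the response would contain a structured tool call with JSON arguments (here, a city name) that the client executes before returning the result to the model.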

Open access is central to Qwen3's release, as Alibaba aims to accelerate Artificial Intelligence innovation across industries. Since inception, the Qwen model family has recorded over 300 million downloads worldwide, with more than 100,000 derivative models created by the developer community. Qwen3 already underpins Alibaba's Artificial Intelligence super assistant app, Quark, and will soon be available via its Model Studio platform. This comprehensive open-source approach signals Alibaba's ambition to redefine the global landscape of large language models and hybrid Artificial Intelligence solutions.

Impact Score: 79

AMD Instinct MI355X passes 1M tokens/sec in MLPerf 6.0

AMD says its MLPerf Inference 6.0 submission combined competitive single-node performance, multinode scale, and broader partner reproducibility. The company also highlighted first-time workloads and a heterogeneous submission spanning different systems and geographies.

Mistral AI secures 830 million for Paris data center

Mistral AI has raised 830 million in debt financing to operate a new data center outside Paris as it pushes for a larger independent cloud and compute footprint in Europe. The facility will be powered by thousands of Nvidia chips and forms part of a broader regional expansion plan.

Anthropic signals cybersecurity push with Mythos

Anthropic’s reported Mythos model points to a deeper push into cybersecurity and a broader effort to expand beyond coding-focused offerings. Enterprises are likely to treat it as one option among many rather than a single answer to their Artificial Intelligence needs.

Nvidia targets laptops with Arm chips at Computex 2026

Nvidia is preparing to introduce Arm-based laptop processors at Computex 2026 as it pushes into the consumer PC market. The new chips, developed with MediaTek, are aimed at thin-and-light systems with strong graphics and Artificial Intelligence performance.

OpenAI reports lower hallucination rates for GPT-5

OpenAI says GPT-5 produces fewer false claims than earlier models, especially when it can browse the web. The gains look smaller without web access, underscoring how much reliability still depends on live sourcing.
