Alibaba Unveils Qwen3: New Standard in Open-Source Large Language Models

Alibaba launches Qwen3, a groundbreaking open-source large language model family, advancing Artificial Intelligence innovation with hybrid reasoning and multilingual support.

Alibaba has introduced Qwen3, its latest generation of open-source large language models, establishing a new benchmark in Artificial Intelligence innovation. Qwen3 comprises six dense models and two Mixture-of-Experts (MoE) models, with parameter scales ranging from 0.6 billion to 235 billion, now freely accessible worldwide. Developers can leverage these models for diverse applications spanning mobile devices, smart glasses, autonomous vehicles, and robotics. All Qwen3 models are available on platforms such as Hugging Face, GitHub, and ModelScope, ensuring broad developer access and fostering global collaboration.

Qwen3 marks Alibaba's debut in hybrid reasoning models, uniting traditional large language model capabilities with advanced dynamic reasoning. The models are engineered to switch flexibly between a 'thinking' mode for complex, multi-step tasks, such as mathematics, coding, and logical deduction, and a 'non-thinking' mode for fast, general-purpose outputs. For API users, Qwen3 provides granular control over the duration of its reasoning (up to 38,000 tokens), optimizing performance while containing computational costs. The flagship model, Qwen3-235B-A22B MoE, notably reduces operational expenses compared to other state-of-the-art models, reaffirming Alibaba's commitment to affordable, high-performance Artificial Intelligence.
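Beyond the API-level reasoning budget, the Qwen3 model cards also document lightweight "soft switches" that users can embed directly in a prompt to toggle modes turn by turn. A minimal Python sketch of that pattern, assuming the documented /think and /no_think tags (the helper name with_mode is ours, purely for illustration):

```python
def with_mode(message: str, thinking: bool) -> str:
    """Append one of Qwen3's documented soft switches to a user turn.

    Per the Qwen3 model cards, a trailing '/think' requests the slow,
    multi-step reasoning mode and '/no_think' requests a fast direct
    answer; exact semantics may vary across releases.
    """
    switch = "/think" if thinking else "/no_think"
    return f"{message} {switch}"


# A chat mixing both modes: a hard proof with reasoning on,
# then a quick follow-up with reasoning off.
messages = [
    {"role": "user",
     "content": with_mode("Prove that sqrt(2) is irrational.", thinking=True)},
    {"role": "user",
     "content": with_mode("Now state the result in one line.", thinking=False)},
]
```

In chat-template workflows (e.g. Hugging Face transformers), the same choice can instead be made programmatically via the template's enable_thinking flag, so applications need not rely on prompt text at all.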

The Qwen3 suite is trained on an expansive dataset of 36 trillion tokens, twice that of its predecessor, resulting in significant advancements in reasoning, instruction following, tool use, and multilingual tasks. Key features include superior support for 119 languages and dialects, robust agent-task integration through native Model Context Protocol and function-calling, leading benchmark scores in mathematics and coding, and enhanced human alignment for natural dialogue and creative applications. The models achieved top-tier results across industry benchmarks including AIME25, LiveCodeBench, BFCL, and Arena-Hard, driven by a complex four-stage training process focused on reinforcement learning and reasoning fusion.

Open access is central to Qwen3's release, as Alibaba aims to accelerate Artificial Intelligence innovation across industries. Since inception, the Qwen model family has recorded over 300 million downloads worldwide, with more than 100,000 derivative models created by the developer community. Qwen3 already underpins Alibaba's Artificial Intelligence super assistant app, Quark, and will soon be available via its Model Studio platform. This comprehensive open-source approach signals Alibaba's ambition to redefine the global landscape of large language models and hybrid Artificial Intelligence solutions.
