Alibaba Unveils Qwen3: A New Standard in Open-Source Large Language Models

Alibaba launches Qwen3, a groundbreaking open-source large language model family that advances Artificial Intelligence with hybrid reasoning and broad multilingual support.

Alibaba has introduced Qwen3, its latest generation of open-source large language models, establishing a new benchmark in Artificial Intelligence innovation. Qwen3 comprises six dense models and two Mixture-of-Experts (MoE) models, with parameter scales ranging from 0.6 billion to 235 billion, now freely accessible worldwide. Developers can leverage these models for diverse applications spanning mobile devices, smart glasses, autonomous vehicles, and robotics. All Qwen3 models are available on platforms such as Hugging Face, GitHub, and ModelScope, ensuring broad developer access and fostering global collaboration.

Qwen3 marks Alibaba's debut in hybrid reasoning models, uniting traditional large language model capabilities with advanced dynamic reasoning. The models are engineered to switch flexibly between "thinking" mode for complex, multi-step tasks, such as mathematics, coding, and logical deduction, and "non-thinking" mode for fast, general-purpose outputs. For API users, Qwen3 provides granular control over the duration of its reasoning (up to 38,000 tokens), optimizing performance while containing computational costs. The flagship model, Qwen3-235B-A22B MoE, notably reduces operational expenses compared to other state-of-the-art models, reaffirming Alibaba's commitment to affordable, high-performance Artificial Intelligence.
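As an illustration, the mode switch and reasoning budget described above might be driven from client code along these lines. This sketch only builds an OpenAI-style chat request payload; the field names `chat_template_kwargs`, `enable_thinking`, and `thinking_budget` are assumptions modeled on common Qwen3 serving conventions and should be checked against the specific provider's API documentation.

```python
# Sketch: toggling Qwen3's hybrid reasoning modes in a chat request payload.
# The parameter names (enable_thinking, thinking_budget) are assumptions
# based on common Qwen3 serving setups; verify against your provider's docs.

def build_qwen3_request(messages, thinking=True, thinking_budget=None):
    """Return an OpenAI-style payload for a hypothetical Qwen3 chat endpoint."""
    payload = {
        "model": "Qwen/Qwen3-235B-A22B",  # flagship MoE model
        "messages": messages,
        # Hybrid reasoning switch: "thinking" mode for multi-step tasks,
        # "non-thinking" mode for fast general-purpose replies.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }
    if thinking and thinking_budget is not None:
        # Cap the length of the reasoning trace (the article cites a ceiling
        # of 38,000 tokens) to contain compute cost on long deductions.
        payload["thinking_budget"] = thinking_budget
    return payload

# Fast, non-thinking request for a simple lookup-style question.
fast = build_qwen3_request(
    [{"role": "user", "content": "Capital of France?"}], thinking=False
)

# Thinking-mode request for a multi-step proof, capped at 8,000 tokens.
deep = build_qwen3_request(
    [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    thinking=True, thinking_budget=8000,
)
```

The point of the per-request switch is cost control: a deployment can reserve the token-hungry thinking mode for tasks that genuinely need multi-step reasoning while serving routine queries cheaply.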

The Qwen3 suite is trained on an expansive dataset of 36 trillion tokens, twice that of its predecessor, resulting in significant advancements in reasoning, instruction following, tool use, and multilingual tasks. Key features include superior support for 119 languages and dialects, robust agent-task integration through native Model Context Protocol and function-calling, leading benchmark scores in mathematics and coding, and enhanced human alignment for natural dialogue and creative applications. The models achieved top-tier results across industry benchmarks including AIME25, LiveCodeBench, BFCL, and Arena-Hard, driven by a complex four-stage training process focused on reinforcement learning and reasoning fusion.

Open access is central to Qwen3's release, as Alibaba aims to accelerate Artificial Intelligence innovation across industries. Since inception, the Qwen model family has recorded over 300 million downloads worldwide, with more than 100,000 derivative models created by the developer community. Qwen3 already underpins Alibaba's Artificial Intelligence super assistant app, Quark, and will soon be available via its Model Studio platform. This comprehensive open-source approach signals Alibaba's ambition to redefine the global landscape of large language models and hybrid Artificial Intelligence solutions.


Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the center of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.
