Alibaba Unveils Qwen3: New Standard in Open-Source Large Language Models

Alibaba launches Qwen3, a groundbreaking open-source large language model family that advances Artificial Intelligence innovation with hybrid reasoning and multilingual support.

Alibaba has introduced Qwen3, its latest generation of open-source large language models, establishing a new benchmark in Artificial Intelligence innovation. Qwen3 comprises six dense models and two Mixture-of-Experts (MoE) models, with parameter scales ranging from 0.6 billion to 235 billion, now freely accessible worldwide. Developers can leverage these models for diverse applications spanning mobile devices, smart glasses, autonomous vehicles, and robotics. All Qwen3 models are openly released on platforms such as Hugging Face, GitHub, and ModelScope, ensuring broad developer access and fostering global collaboration.

Qwen3 marks Alibaba's debut in hybrid reasoning models, uniting traditional large language model capabilities with advanced dynamic reasoning. The models are engineered to switch flexibly between a 'thinking' mode for complex, multi-step tasks—such as mathematics, coding, and logical deduction—and a 'non-thinking' mode for fast, general-purpose outputs. For API users, Qwen3 provides granular control over the duration of its reasoning (up to 38,000 tokens), optimizing performance while containing computational costs. The flagship model, Qwen3-235B-A22B MoE, notably reduces operational expenses compared to other state-of-the-art models, reaffirming Alibaba's commitment to affordable, high-performance Artificial Intelligence.

The Qwen3 suite is trained on an expansive dataset of 36 trillion tokens, twice that of its predecessor, resulting in significant advancements in reasoning, instruction following, tool use, and multilingual tasks. Key features include superior support for 119 languages and dialects, robust agent-task integration through native Model Context Protocol and function-calling support, leading benchmark scores in mathematics and coding, and enhanced human alignment for natural dialogue and creative applications. The models achieved top-tier results across industry benchmarks including AIME25, LiveCodeBench, BFCL, and Arena-Hard, driven by a four-stage post-training process centered on reinforcement learning and thinking-mode fusion.
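Qwen3's function-calling support works with the JSON tool schema that OpenAI-compatible chat endpoints have standardized on. As a hedged illustration (the `get_weather` tool below is hypothetical, not part of Qwen3), a tool declaration might be assembled like this:

```python
def make_tool_spec(name: str, description: str, properties: dict, required: list[str]) -> dict:
    """Build a tool declaration in the JSON-schema format accepted by
    OpenAI-compatible chat endpoints, which Qwen3's function-calling uses.
    The concrete tool passed in is up to the application."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

# Hypothetical example tool: look up current weather for a city.
weather_tool = make_tool_spec(
    "get_weather",
    "Return the current weather for a city.",
    {"city": {"type": "string", "description": "City name"}},
    ["city"],
)
```

In use, such declarations would be passed in the `tools` list of a chat-completion request, and the model responds with a structured call naming the tool and its arguments.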

Open access is central to Qwen3's release, as Alibaba aims to accelerate Artificial Intelligence innovation across industries. Since inception, the Qwen model family has recorded over 300 million downloads worldwide, with more than 100,000 derivative models created by the developer community. Qwen3 already underpins Alibaba's Artificial Intelligence super assistant app, Quark, and will soon be available via its Model Studio platform. This comprehensive open-source approach signals Alibaba's ambition to redefine the global landscape of large language models and hybrid Artificial Intelligence solutions.


