Alibaba Cloud debuts Qwen3.5 multimodal large language model series

Alibaba Cloud’s Qwen team has released Qwen3.5, a new multimodal large language model series with expanded language coverage, hybrid mixture-of-experts architecture, and broad support across open-source tooling and cloud APIs.

Alibaba Cloud’s Qwen team has released Qwen3.5, a new large language model series positioned as a major upgrade in multimodal capability, efficiency and accessibility. The models integrate advances in multimodal learning, architectural design, reinforcement learning scale and global linguistic coverage to target both developers and enterprises. Qwen3.5 is introduced as a foundation for agents, coding, reasoning and visual understanding workloads, supported by a growing ecosystem of tools, documentation and community channels.

At the core of Qwen3.5 is a unified vision-language foundation, built through early-fusion training on trillions of multimodal tokens to reach cross-generational parity with Qwen3 and surpass the Qwen3-VL models on benchmarks spanning reasoning, coding, agents and visual tasks. An efficient hybrid architecture combines Gated Delta Networks with a sparse Mixture-of-Experts design to deliver high-throughput inference with minimal latency and cost overhead. Reinforcement learning is scaled across million-agent environments with progressively complex task distributions to prioritize robust generalization to real-world scenarios. Global linguistic coverage expands to 201 languages and dialects, supporting inclusive deployment with more nuanced cultural and regional understanding. Next-generation training infrastructure targets near-100% multimodal training efficiency relative to text-only training and uses asynchronous reinforcement learning frameworks for large-scale agent scaffolds and environment orchestration.
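To make the sparse Mixture-of-Experts idea concrete, here is a minimal toy sketch of top-k expert routing: each token is sent to only the k highest-scoring experts, so most expert parameters stay idle per token. This is purely illustrative; Qwen3.5's actual routing, gating and expert implementation are not described beyond the summary above.

```python
# Toy sketch of sparse top-k Mixture-of-Experts routing (illustrative only;
# not the Qwen3.5 implementation). Experts are plain functions and the
# "gate" is a precomputed score list, standing in for a learned router.

def top_k_indices(scores, k):
    # Indices of the k highest gate scores, best first.
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def moe_forward(x, experts, gate_scores, k=2):
    # Route input x to the k top-scoring experts only, then mix their
    # outputs weighted by the renormalized gate scores.
    chosen = top_k_indices(gate_scores, k)
    total = sum(gate_scores[i] for i in chosen)
    return sum(gate_scores[i] / total * experts[i](x) for i in chosen)

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x - 3]
out = moe_forward(10.0, experts, gate_scores=[0.1, 0.7, 0.2], k=2)
# Only experts 1 and 2 run; expert 0 is skipped entirely.
```

With k fixed and the expert count grown, total parameters rise while per-token compute stays roughly constant, which is the efficiency argument behind designs such as the 397B-A17B model (397B total, 17B active parameters).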

The first public release, dated 2026-02-16, includes a 397B-A17B MoE model, with additional sizes promised. Official model weights are distributed on Hugging Face Hub and ModelScope, with support for both automatic and manual download via tools such as huggingface-cli download, git clone and modelscope download, and environment variables such as SGLANG_USE_MODELSCOPE=true or VLLM_USE_MODELSCOPE=true for pulling weights from ModelScope. Qwen3.5 can be accessed through the Qwen Chat applications on web, desktop and mobile, as well as via the Qwen API on Alibaba Cloud Model Studio, which is compatible with multiple API specifications, including OpenAI's and Anthropic's. Local and server deployments are supported through Hugging Face Transformers, llama.cpp, MLX for Apple Silicon, SGLang and vLLM, with OpenAI-compatible endpoints exposed at URLs such as http://localhost:8000/v1 (vLLM) and http://localhost:30000/v1 (SGLang), and configuration options such as context-length 262144 (SGLang) and max-model-len 262144 (vLLM).
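A local deployment along those lines might look like the following sketch. The repo id Qwen/Qwen3.5-397B-A17B is an assumption based on the announced model size; check the official model card for the exact name and hardware requirements.

```shell
# Sketch of a local deployment; the repo id below is assumed, not confirmed.

# Download weights from Hugging Face Hub
huggingface-cli download Qwen/Qwen3.5-397B-A17B

# Serve with vLLM: OpenAI-compatible endpoint at http://localhost:8000/v1,
# with the context window set to 262144 tokens
vllm serve Qwen/Qwen3.5-397B-A17B --max-model-len 262144

# Or serve with SGLang: endpoint at http://localhost:30000/v1
# (export SGLANG_USE_MODELSCOPE=true to fetch from ModelScope instead)
python -m sglang.launch_server \
  --model-path Qwen/Qwen3.5-397B-A17B \
  --context-length 262144
```

Either server then accepts standard OpenAI-style chat completion requests, so existing client code only needs its base URL pointed at the local endpoint.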

For workflow integration, Qwen Code is offered as an open-source command-line Artificial Intelligence agent optimized for Qwen models, helping developers work with large codebases and automation tasks. Qwen Agent provides an open-source agent framework for building applications around instruction following, tool use, planning and memory on top of Qwen models. The maintainers recommend training frameworks such as Unsloth, Swift and LLaMA-Factory for fine-tuning Qwen3.5 with methods including supervised fine-tuning (SFT), Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO). All open-weight models in the series are licensed under Apache 2.0, and formal citation guidance is supplied for academic use. Community support and feedback channels include GitHub issues and discussions, along with Discord and WeChat groups for direct contact with the research and product teams.
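The tool-use plumbing that a framework like Qwen Agent automates can be sketched in miniature: the model emits a structured tool call, and the harness dispatches it to a registered function and feeds the result back. The names below are illustrative, not the Qwen Agent API.

```python
# Toy sketch of agent tool dispatch (illustrative; not the Qwen Agent API).
# A registry maps tool names to Python callables; the model is assumed to
# emit calls shaped like {"name": ..., "arguments": {...}}.

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def run_tool_call(call):
    # Look up the requested tool and invoke it with the model-supplied
    # keyword arguments; the return value would be appended to the
    # conversation as a tool result message.
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = run_tool_call({"name": "add", "arguments": {"a": 2, "b": 3}})
```

A real framework adds the pieces this sketch omits: schema validation, multi-step planning, memory across turns, and the chat loop that carries tool results back to the model.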

Impact Score: 68

Lockheed Martin tests Artificial Intelligence enhanced combat identification on F-35

Lockheed Martin has flight tested an Artificial Intelligence enhanced combat identification capability on the F-35, using a tactical model in flight to generate independent threat identifications on the pilot’s display. The Project Overwatch demonstration points to faster decision making and rapid software updates as key elements of future air combat.

Modi touts India’s Artificial Intelligence advances and heritage focus at India Impact Global Artificial Intelligence Summit

Prime Minister Narendra Modi used his Mann Ki Baat address to spotlight India’s role at the India Impact Global Artificial Intelligence Summit, highlighting new domestic models and applications in agriculture and cultural preservation. The summit culminated in a New Delhi declaration on Artificial Intelligence backed by 88 countries and international organisations.
