Alibaba has introduced Wan2.2-Animate, a new open-source digital-human video generation model optimized for character animation and replacement. The release, published on Sept. 22, 2025, is the second open-source entry in the Wan2.2 series to appear within a month, underscoring the company's ongoing work in AI-powered digital-human tools. Wan2.2-Animate accepts a character image and a supporting reference video, then synthesizes a new video that mirrors the facial expressions and body movements in the reference.
The model can also replace a character in an existing video with one from a provided source image while preserving the original expressions and motion trajectories. Alibaba describes an approach that deconstructs human motion into fundamental skeletal patterns and separately captures facial expressions from the source video. That decomposition and reconstruction give creators precise motion control and let them replicate subtle movements that would normally require far more manual animation effort. The model also reproduces the original lighting and color characteristics to keep the inserted character consistent with the scene.
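The core idea of decomposing motion into skeletal patterns can be illustrated with a toy retargeting sketch: extract only the bone directions from a reference pose, then rebuild the pose on a character with different proportions. This is a simplified illustration, not Alibaba's actual pipeline; the joint layout and function names are invented for the example.

```python
import numpy as np

# (parent, child) joint indices for a toy 3-joint chain: root -> elbow -> hand.
BONES = [(0, 1), (1, 2)]

def retarget(ref_joints, target_bone_lengths):
    """Rebuild a pose from the reference's bone directions,
    keeping the target character's own bone lengths."""
    out = np.zeros_like(ref_joints)
    out[0] = ref_joints[0]  # keep the root position from the reference
    for (parent, child), length in zip(BONES, target_bone_lengths):
        direction = ref_joints[child] - ref_joints[parent]
        direction = direction / np.linalg.norm(direction)  # orientation only
        out[child] = out[parent] + length * direction      # target's scale
    return out

# One frame of 2D reference motion: arm bent at a right angle.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
# Retarget onto a longer-limbed character; the bend is preserved.
target = retarget(ref, target_bone_lengths=[2.0, 2.0])
```

Running this per frame over a whole reference video yields a motion track that drives the new character while ignoring the reference actor's body proportions, which is the intuition behind separating "what moves" from "who moves."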
To address integration challenges, Wan2.2-Animate incorporates an auxiliary relighting Low-Rank Adaptation (LoRA) module that automatically adjusts a character's appearance to match the target environment, from simple shadows to complex lighting conditions. Alibaba positions the model as a productivity tool for creators in film, television, short-form video, gaming, and advertising, with the aim of simplifying workflows and reducing production costs. Wan2.2-Animate is available for download on Hugging Face, GitHub, and ModelScope. Last month's Wan2.2-S2V (speech-to-video) converted portrait photos into film-quality avatars that can speak, sing, and perform. Alibaba says the Wan series has exceeded 30 million downloads across open-source communities and third-party platforms to date.
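For readers unfamiliar with the technique, LoRA adapts a frozen model by adding a trainable low-rank update to a weight matrix rather than retraining it. A minimal numpy sketch of the mechanism (shapes and scaling are illustrative; this says nothing about how Alibaba's relighting adapter is trained):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank, alpha = 64, 64, 4, 8.0
W = rng.standard_normal((d_out, d_in))        # frozen base weight matrix
A = rng.standard_normal((rank, d_in)) * 0.01  # small trainable down-projection
B = np.zeros((d_out, rank))                   # zero-init up-projection

def adapted_forward(x):
    # Base output plus the scaled low-rank correction (alpha/rank is the
    # conventional LoRA scaling factor).
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = adapted_forward(x)
```

Because `B` starts at zero, the adapter is an exact no-op before training; only the small `A` and `B` factors (here 2 × 64 × 4 parameters instead of 64 × 64) are updated, which is why LoRA modules like the relighting adapter can be shipped as lightweight add-ons to a large base model.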