Bytedance has introduced Seedance 2.0 in China, a new artificial intelligence (AI) video generation model that produces clips of up to two minutes at 1080p from a single prompt combining text, images, audio, or video. The system is being tested with a limited group of users inside Jimeng, Bytedance's AI image and video app, and Jianying, its Chinese video editing platform. The launch underscores Bytedance's expanding ambitions in generative video at a time when global competition in the category is accelerating and China is building momentum in advanced AI video technology.
Seedance 2.0 is built to generate multi-shot videos with consistent characters, physics-based motion, sound effects, music, and voiceovers in a single workflow. The model supports native multi-shot storytelling from one prompt and offers phoneme-level lip-sync in more than eight languages; Bytedance says its RayFlow optimization system makes generation 30% faster than the previous version. Beyond raw resolution, the system emphasizes continuity and coherence: it maintains motion consistency across cuts, preserves character identity between scenes, and synchronizes audio and visuals within the same generation pass to achieve what Bytedance describes as cinematic-quality output at 1080p.
Clips produced with Seedance 2.0 have spread widely on Chinese social platforms, with early users calling it a meaningful step forward in AI video generation and likening it to a director that can create full multi-scene videos from a single prompt rather than stitching together separate clips. The debut has also raised concerns about its impact on creative work in editing, scripting, and video production, reflecting ongoing tensions around generative media tools even as enthusiasm grows. Inside China, coverage of the launch surged, AI-focused application companies drew renewed attention, and related stocks gained. Meanwhile, Bytedance continues to restrict access to select users, a sign of both cautious deployment and a push to stay at the forefront of rapidly evolving AI video technology.
