Seedance 2.0 is a multimodal AI video generation and editing model built for cinematic storytelling. It combines text, images, reference videos, and audio to direct scene composition, character appearance, motion style, and rhythm. Its Omni-Reference workflow accepts up to 12 mixed reference files per request, of which at most 9 can be images, at most 3 videos, and at most 3 MP3 audio files. The model is designed to maintain character consistency, preserve details, and reduce flicker across frames. It also supports first-and-last-frame interpolation, video extension, and in-video editing, making it suitable for both generation and post-production.
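As a minimal sketch of the file-mix limits described above, the snippet below checks a list of reference files against the stated caps (12 files total; at most 9 images, 3 videos, and 3 MP3 audio files). The function name, the extension-to-type mapping, and the error messages are illustrative assumptions, not part of any actual Seedance API.

```python
from collections import Counter

# Hypothetical per-type caps and total limit, per the Omni-Reference description.
LIMITS = {"image": 9, "video": 3, "audio": 3}
MAX_TOTAL = 12

# Assumed extension-to-type mapping for illustration only.
EXTENSION_KINDS = {
    ".png": "image", ".jpg": "image", ".jpeg": "image",
    ".mp4": "video", ".mov": "video",
    ".mp3": "audio",
}

def validate_references(paths):
    """Return a list of violation messages; an empty list means the mix is allowed."""
    errors = []
    if len(paths) > MAX_TOTAL:
        errors.append(f"{len(paths)} files exceeds the {MAX_TOTAL}-file limit")
    counts = Counter()
    for p in paths:
        ext = p[p.rfind("."):].lower() if "." in p else ""
        kind = EXTENSION_KINDS.get(ext)
        if kind is None:
            errors.append(f"unsupported file type: {p}")
        else:
            counts[kind] += 1
    for kind, limit in LIMITS.items():
        if counts[kind] > limit:
            errors.append(f"{counts[kind]} {kind} files exceeds the limit of {limit}")
    return errors
```

For example, `validate_references(["ref.png", "style.mp4", "beat.mp3"])` returns an empty list, while passing four `.mp4` files would report that the 3-video cap is exceeded.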