Seedance 2.0 is ByteDance’s next-generation multimodal video generation model, now available inside ComfyUI. It accepts text, images, video, and audio as unified inputs and produces high-quality video with synced audio, consistent characters, and cinematic camera motion in a single pass.
Documentation Index
Fetch the complete documentation index at: https://dripart-mintlify-86baf657.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
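By convention, an llms.txt index is a plain markdown list of links, so it can be parsed programmatically before deciding which pages to fetch. A minimal sketch, assuming the standard markdown-link layout (the helper name and sample line are illustrative, not part of these docs):

```python
import re

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from markdown links in an llms.txt index."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)

# Example input line, in the usual llms.txt style:
sample = "- [Seedance 2.0](https://example.com/seedance): overview page"
print(parse_llms_index(sample))  # → [('Seedance 2.0', 'https://example.com/seedance')]
```

Fetch the real file from the URL above and feed its contents to a parser like this to enumerate every available page.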
Key capabilities
- Multimodal in — Prompt with text plus images, video, and audio
- Audio-video sync — Generate video and audio together
- Directing control — Camera moves and shot pacing stay controllable
- Consistency — Keep characters and scenes stable across a clip
- Editing + extend — Edit footage or extend clips without starting over
Available workflows
- Real Human — Generate videos featuring real people with identity consistency after a one-time verification. See Seedance 2.0 Real Human for details.
Seedance 2.0 Real Human (liveness verification)
Learn how verification works and how to reuse Group ID / Asset ID.
Text to video (T2V)
Generate a video from a text prompt, with Seedance 2.0 handling scene, motion, and pacing.
Run Text-to-Video on Cloud
Try the Text-to-Video workflow instantly on Comfy Cloud.
Download Text-to-Video workflow
Download the workflow JSON.
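A downloaded workflow JSON can also be queued against a locally running ComfyUI instance through its HTTP API. A minimal sketch, assuming the default local server at `127.0.0.1:8188` and a workflow exported in API format (the file name is illustrative):

```python
import json
import urllib.request

def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> urllib.request.Request:
    """Build a POST request that queues a workflow on ComfyUI's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running ComfyUI server, so not executed here):
# with open("text_to_video.json") as f:
#     urllib.request.urlopen(queue_workflow(json.load(f)))
```

The same pattern applies to the Reference-to-Video and FLF2V workflow files below; only the JSON contents differ.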
Reference to video (R2V)
Use reference images, video, or audio to guide look, motion, and rhythm while keeping results coherent.
Run Reference-to-Video on Cloud
Try the Reference-to-Video workflow instantly on Comfy Cloud.
Download Reference-to-Video workflow
Download the workflow JSON.
First-last-frame to video (FLF2V)
Provide a starting frame and an ending frame, and Seedance 2.0 generates the motion and transitions between them.
Run FLF2V on Cloud
Try the First-Last-Frame-to-Video workflow instantly on Comfy Cloud.
Download FLF2V workflow
Download the workflow JSON.