Seedance 2.0 core
Direct API access to ByteDance's flagship. Multimodal inputs: text, image, audio, video. Native 1080p. 15-second clips. #1 on leaderboards.
Not a demo, not a research preview: a working pipeline your team can rely on. Here's what's actually inside.
Up to 9 reference images, 3 video clips, and 3 audio clips in a single prompt. Composition, motion, and audio are planned in one pass.
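Those per-prompt caps can be enforced client-side before a request ever leaves your pipeline. A minimal sketch in Python: the limits mirror the numbers quoted above, but the field names and payload shape are an assumption, not the platform's documented schema.

```python
# Hypothetical payload builder -- field names are illustrative,
# not the platform's published request schema.
MAX_IMAGES, MAX_VIDEOS, MAX_AUDIO = 9, 3, 3

def build_payload(prompt, images=(), videos=(), audio=()):
    """Assemble one multimodal prompt, enforcing the per-prompt caps."""
    if len(images) > MAX_IMAGES:
        raise ValueError(f"at most {MAX_IMAGES} reference images per prompt")
    if len(videos) > MAX_VIDEOS:
        raise ValueError(f"at most {MAX_VIDEOS} video clips per prompt")
    if len(audio) > MAX_AUDIO:
        raise ValueError(f"at most {MAX_AUDIO} audio clips per prompt")
    return {
        "prompt": prompt,
        "references": {
            "images": list(images),
            "videos": list(videos),
            "audio": list(audio),
        },
    }

payload = build_payload(
    "Golden-hour drone pass over a coastline",
    images=["board_frame_01.png", "palette.png"],
    audio=["surf_ambience.wav"],
)
```

Validating locally means a prompt that exceeds a cap fails fast in your code instead of burning a queued render.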
Seedance renders audio together with video in one pass. Your clip ships with foley, ambience and dialogue timed to the motion.
Alibaba's open-source video model lands on the platform the moment the API is public. Same workspace, same tokens, one extra model in the router.
Use everything you generate in paid ads, YouTube, TikTok, client work. The license sits in our Terms, not marketing copy.
Share brand kits, recycle tokens, collaborate on storyboards in real time. Per-seat audit logs on Studio tier.
Plain English. A sentence or a shot list. Attach a reference image or clip if you have one.
Every prompt is matched to the best-fit model in our render pool. You don't pick a model; you just get the right output.
1080p, up to 16 seconds per clip. Batch as many as you want; we queue them in parallel.
MP4, ProRes or WebM. Keep the seed, tweak the prompt, re-render for free.
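The keep-the-seed re-render loop can be sketched as a small helper. Only the workflow comes from the text above (fix the seed, tweak the prompt, choose a container); the parameter names and the 16-second clamp are illustrative assumptions, not a documented API.

```python
import random

# Hypothetical request builder -- parameter names are assumptions.
FORMATS = {"mp4", "prores", "webm"}

def render_request(prompt, seed=None, fmt="mp4", duration=16):
    """Build a render job. A fixed seed makes re-renders reproducible;
    omit it and a fresh one is drawn (simulated here with random)."""
    if fmt not in FORMATS:
        raise ValueError(f"format must be one of {sorted(FORMATS)}")
    return {
        "prompt": prompt,
        "seed": seed if seed is not None else random.randrange(2**32),
        "format": fmt,
        "duration_s": min(duration, 16),  # clips cap at 16 seconds
    }

first = render_request("Neon alley, slow dolly-in, rain")
# Keep the seed, tweak only the prompt, re-render:
redo = render_request("Neon alley, slow dolly-in, rain, heavier fog",
                      seed=first["seed"], fmt="prores")
```

Reusing `first["seed"]` is what makes the free re-render useful: the second job varies the prompt while holding the stochastic starting point constant.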