Create Talking Videos with MultiTalk
A video generation workflow that creates talking videos with the Wan2.1 Text-to-Video and MultiTalk models, lip-syncing the supplied audio to a source image or video input. It features LLM integration for automatic prompt generation, image/video resizing, audio processing with Wav2Vec, and video post-processing effects before everything is combined into the final video.
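At a high level the graph runs the stages below in order. The following is a conceptual sketch only: the function names are hypothetical placeholders standing in for groups of nodes in the workflow, not actual node or API names.

```python
# Conceptual sketch only: each helper stands in for a group of nodes in the
# graph and does not exist as a real function or node name.
def generate_talking_video(source_media, driving_audio, user_text):
    prompt = llm_generate_prompt(user_text)        # LLM integration: automatic prompt generation
    reference = resize_media(source_media)         # image/video resizing to the working resolution
    audio_embeds = wav2vec_encode(driving_audio)   # audio processing with Wav2Vec
    frames = wan_multitalk_sample(                 # Wan2.1 T2V + MultiTalk sampling
        prompt=prompt,
        reference=reference,
        audio_embeds=audio_embeds,
    )
    frames = apply_post_effects(frames)            # video post-processing effects
    return combine_video_audio(frames, driving_audio)  # mux frames and audio into the final video
```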
Workflow Properties
Basic information about this workflow
| Property | Value |
|---|---|
| Nodes | 72 |
| Connections | 69 |
| Version | 0.4 |
| Groups | |
Workflow Preview
Environment Configuration
This workflow includes pre-configured environment settings
| Setting | Value |
|---|---|
| ComfyUI Version | 483b3e62e00624fc52da8ad67e88f863abe975d2 |
| GPU Type | L40S |
| Python Version | 3.11 |
| Custom Nodes | ComfyUI Deploy, ComfyUI-VideoHelperSuite, ComfyUI-WanVideoWrapper, ComfyUI-KJNodes, ComfyUI-GIMM-VFI, Comfyroll Studio, rgthree-comfy, ComfyUI-LTXVideo, ComfyUI-WanResolutionSelector, ComfyUI Frame Interpolation, Various ComfyUI Nodes by Type, ComfyUI-SeedVR2_VideoUpscaler.git, audio-separation-nodes-comfyui, comfyui-llm-toolkit, ComfyUI-RMBG, Comfyui_LG_Tools, ComfyUI-ChatterboxTTS |
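If you are reproducing this environment by hand rather than importing it into ComfyDeploy, the key details are the pinned ComfyUI commit and the custom node list above. Below is a minimal sketch, assuming an existing local clone of ComfyUI; the two repository URLs shown are assumptions for illustration and are not part of this listing, so look up the URL for each node pack yourself.

```python
# Minimal sketch: pin ComfyUI to the listed commit and install custom node packs.
import subprocess
import sys
from pathlib import Path

COMFYUI_DIR = Path("ComfyUI")  # assumes an existing local clone of ComfyUI
PINNED_COMMIT = "483b3e62e00624fc52da8ad67e88f863abe975d2"  # from the table above

# Hypothetical name -> repository URL mapping; extend with the remaining node packs.
CUSTOM_NODE_REPOS = {
    "ComfyUI-VideoHelperSuite": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    "ComfyUI-KJNodes": "https://github.com/kijai/ComfyUI-KJNodes",
}

def run(cmd, cwd):
    subprocess.run(cmd, cwd=cwd, check=True)

# Pin the ComfyUI core to the commit this workflow was built against.
run(["git", "fetch", "--all"], COMFYUI_DIR)
run(["git", "checkout", PINNED_COMMIT], COMFYUI_DIR)

# Clone each custom node pack into ComfyUI/custom_nodes and install its
# requirements.txt when it ships one.
custom_nodes = COMFYUI_DIR / "custom_nodes"
for name, url in CUSTOM_NODE_REPOS.items():
    if not (custom_nodes / name).exists():
        run(["git", "clone", url, name], custom_nodes)
    if (custom_nodes / name / "requirements.txt").exists():
        run([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
            custom_nodes / name)
```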
How to Use
Click "Use This Workflow" to import it into your ComfyDeploy workspace with pre-configured environment settings
Download the JSON file to use in your own ComfyUI instance
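For the second option, ComfyUI exposes a small HTTP API on its local server (port 8188 by default). Below is a minimal sketch of queueing the workflow programmatically; it assumes you have re-exported the graph from the ComfyUI editor via "Save (API Format)" (the regular graph JSON is not accepted by the /prompt endpoint) and saved it as multitalk_workflow_api.json, a hypothetical filename.

```python
# Minimal sketch: queue a workflow on a locally running ComfyUI server.
import json
import uuid
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default ComfyUI address

# Hypothetical filename; must be the API-format export, not the editor graph JSON.
with open("multitalk_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({
    "prompt": workflow,              # node graph in API format
    "client_id": str(uuid.uuid4()),  # lets you match progress events over the websocket later
}).encode("utf-8")

req = urllib.request.Request(
    f"{COMFYUI_URL}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))           # response includes the prompt_id of the queued job
```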
Need Help?
Join our vibrant community for support, tips, and workflow sharing
Free to join • 24/7 community support • 3,000+ members