Explore Community Workflows
Discover amazing workflows created by the ComfyUI community. Download, use, and get inspired by creative workflows from artists and developers around the world.
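
Once you download a workflow, you can run it headlessly as well as in the UI. Below is a minimal sketch of queueing a workflow on a local ComfyUI server, assuming the default address `127.0.0.1:8188` and a workflow exported via "Save (API Format)"; the filename is a placeholder.

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # assumed default local ComfyUI address

def queue_workflow(path: str) -> str:
    """Queue an API-format workflow JSON and return its prompt_id."""
    with open(path, "r", encoding="utf-8") as f:
        graph = json.load(f)  # node graph keyed by node id
    req = urllib.request.Request(
        f"{SERVER}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]

if __name__ == "__main__":
    # Placeholder filename; use any workflow below exported in API format.
    print(queue_workflow("workflow_api.json"))
```

Outputs can then be retrieved from the server's `/history/<prompt_id>` endpoint once the job finishes.
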
waifu_simper
An image-to-image and image-to-video workflow that takes a picture of your waifu and another of you, then sends the two of you on a date anywhere in the world.

GrokTrend_dance
A comprehensive AI-driven workflow for image editing and image-to-video that makes your waifu move the way you wish her to.

Wan Image ALL-IN-ONE
Use the Wan model to generate high-quality images. Text-to-image, image-to-image, and ControlNets included.

ReStyle Images Kontext Max
An image-to-image workflow to restyle images using Kontext Max and LLM-toolkit signature style presets.

kontext_presets_api
A multi-image image-to-image workflow that uses an LLM toolkit to manage prompts and replicate the Kontext presets, adding even more features.

Kontext Presets Local (nunchaku)
Apply BFL Kontext presets plus CD presets to your images, using Nunchaku.

Re-Styler
Generate creative profile pictures easily and fast (other kinds of pictures also work).

Nunchaku fast & simple KONTEXT editing
A multi-image editing workflow that uses the Flux/Kontext model for image-to-image transformations and object integration. It features optional background removal, image resizing, and LLM-assisted prompt generation, and samples with the Euler sampler.

iNFINITE AVATAR
Generates talking-character videos with dialogue spoken in an on-the-fly cloned voice; the video can be as long as the text you feed it. It takes a reference image and an optional video, uses an LLM to generate dialogue and video prompts, performs text-to-speech, and combines the generated audio and video. It utilizes WanVideo models (including the lightx2v and Fun-Reward LoRAs), the Uni3C ControlNet, MultiTalk for audio processing, and VHS helper nodes.

Voice-Cloning & Text-To-Speech
A Voice Cloning and Text-to-Speech workflow that uses audio processing nodes like `LoadAudio`, `AudioCrop`, and `AudioSeparation`, followed by voice cloning with `FL_ChatterboxVC` and text-to-speech with `Chatterbox_TTS` based on reference audio inputs.
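
As a rough illustration of that chain, here is a trimmed API-format graph sketch in Python. The `class_type` names come from the description above; the input field names and values are assumptions for illustration, not the nodes' actual signatures.

```python
# Illustrative only: class_type names are taken from the workflow, but the
# input names ("audio", "start", "text", ...) are assumed, not the nodes'
# real signatures. Links use ComfyUI's [node_id, output_index] form.
voice_clone_graph = {
    "1": {"class_type": "LoadAudio",        # reference recording of the voice
          "inputs": {"audio": "reference_voice.wav"}},
    "2": {"class_type": "AudioCrop",        # keep the cleanest segment
          "inputs": {"audio": ["1", 0], "start": 0.0, "end": 10.0}},
    "3": {"class_type": "AudioSeparation",  # isolate speech from music/noise
          "inputs": {"audio": ["2", 0]}},
    "4": {"class_type": "FL_ChatterboxVC",  # voice cloning from the cleaned clip
          "inputs": {"audio": ["3", 0]}},
    "5": {"class_type": "Chatterbox_TTS",   # speak new text in the cloned voice
          "inputs": {"text": "Hello from ComfyUI!", "voice": ["4", 0]}},
}
```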

Product-Generator_V2
Generate ready-to-use images of any product for social media or your e-commerce store.

OmniGen2 Natural Language Image Editing
OmniGen2 can generate and edit images using natural language. It is an experimental multimodal model without major pretensions, but it can produce interesting outputs and be useful in some cases.

Kontext Simple Single image edit
A single-image editing workflow for the Kontext base model with Negative Attention Guidance (NAG) to restyle or edit an input image, featuring background removal and AI-assisted prompt generation via an LLM.

Kontext-Edit images with guided Auto-Prompts
An **image editing** workflow that leverages Flux Kontext and the NAG technique, enhanced by an LLM for automated prompt generation, to edit a source image based on a reference image using reference-latent and Flux guidance techniques.

Thumbnail Generation
Generate YouTube thumbnails with your face and your video title!

Kontext & LLM-Toolkit Auto Prompts with 2 Inputs
An AI-assisted image-to-image workflow leveraging FLUX KONTEXT DEV that uses an LLM-Toolkit to generate detailed prompts for image transformation tasks, such as changing backgrounds or applying styles, based on two input images. Key components include the Flux model, a DualCLIPLoader, VAE encoding/decoding, image resizing with ImageResizeKJv2, AI-driven prompt generation via the LLMToolkitTextGeneratorStream and PromptManager nodes, and image processing with BiRefNetRMBG nodes.
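
For orientation, the Kontext conditioning backbone of a workflow like this looks roughly as follows in API format. DualCLIPLoader, CLIPTextEncode, ReferenceLatent, and FluxGuidance are standard ComfyUI nodes; the model filenames, prompt, and guidance value are assumptions, and the custom LLM-toolkit and BiRefNetRMBG nodes are omitted.

```python
# Sketch of the Flux Kontext conditioning chain (core nodes only).
# Filenames, the prompt, and the guidance value are assumptions,
# not taken from the workflow itself.
kontext_conditioning = {
    "1": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "clip_l.safetensors",
                     "clip_name2": "t5xxl_fp16.safetensors",
                     "type": "flux"}},
    "2": {"class_type": "CLIPTextEncode",   # prompt, e.g. produced by the LLM toolkit
          "inputs": {"clip": ["1", 0],
                     "text": "change the background to a neon city at night"}},
    "3": {"class_type": "ReferenceLatent",  # attach the VAE-encoded input image
          "inputs": {"conditioning": ["2", 0],
                     "latent": ["0", 0]}},  # "0" = a VAEEncode node (not shown)
    "4": {"class_type": "FluxGuidance",     # Kontext is usually run at low guidance
          "inputs": {"conditioning": ["3", 0], "guidance": 2.5}},
}
```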

OmniGen2 Image Editing, Re-Stylization and Generation
A multi-image workflow using the OmniGen2 model to transform input images into different styles (e.g., realistic, anime) and combine elements based on text prompts, supporting single or dual image inputs and handling image resizing and stitching.

Product and Identity Video With MAGREF
An image-to-video workflow that generates a video from multiple input images and text prompts, using the Wan2_1-Wan-I2V-MAGREF-14B base model and a Wan21 LoRA for image-to-video encoding. It includes pre-processing steps like image resizing and background removal, uses the WanVideoSampler, and applies post-processing effects such as color adjustment and film grain before combining frames into a video.

Remove Objects and Watermarks from Videos (WAN-MiniMaxRemover)
A video-to-video workflow that removes objects from a video using the WanVideo MiniMaxRemover model, applying masks generated via Florence2 and BiRefNetRMBG, and utilizing specific WanVideo LoRAs and the unipc sampler for the generation process.

Convert 2D to 3D with Hy3d V2.1
A 3D generation workflow that creates, processes, and textures a 3D mesh from an input image using the Hunyuan3D models (`hunyuan3d-dit-v2-1/model.fp16.ckpt` and `hunyuan3d-vae-v2-1.fp16.ckpt`), including multi-view generation, baking, inpainting, and final export.