Kijai has adapted Wan Alpha “DoRA”: HF:Kijai/WanVideo_comfy:LoRAs/WanAlpha
Decoder needed for WanAlpha: decoder.bin in the following locations (the files have different hashes but the same size..)
I did not know they originally had one file but split it into two for Comfy; it's just the fgr (foreground?) and pha (alpha?) parts split into two files
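As a rough sketch of what such a split amounts to, here is how a combined decoder checkpoint could be split by key prefix; the "fgr."/"pha." prefixes and file names are assumptions for illustration, not verified against the actual WanAlpha files:

```python
# Hypothetical illustration: split a combined decoder checkpoint into separate
# foreground (fgr) and alpha (pha) files by state-dict key prefix.
# The "fgr."/"pha." prefixes are assumed, not taken from the real checkpoint.
import torch

combined = torch.load("decoder.bin", map_location="cpu", weights_only=True)

fgr = {k: v for k, v in combined.items() if k.startswith("fgr.")}
pha = {k: v for k, v in combined.items() if k.startswith("pha.")}

torch.save(fgr, "decoder_fgr.bin")
torch.save(pha, "decoder_pha.bin")
print(f"fgr: {len(fgr)} tensors, pha: {len(pha)} tensors")
```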
Test workflow:
So if anyone wants to use the loop nodes: do not disable the ComfyUI cache like I do; I wasted 30 minutes figuring out that those nodes need the cache.
Drozbay:
Loops are possible with the current execution flow but are still somewhat fragile, and they don't allow for starting/stopping partial executions. You can't stop halfway through a set of loops, change something for the next iteration, and continue. Indexing with lists is also not super reliable right now. Overall it's often easier and more stable to just lean into the practically infinite canvas and make gigantic workflows. They are large, but to me they are simpler to understand than having everything hidden in loops or layers of subgraphs.
GH:Azornes/Comfyui-Resolution-Master
Hmm.. what is NAGGuider from NAG?..
Native ComfyUI now has a NormalizeVideoLatentStart node, which has been lifted out of the original Kandinsky-5 implementation.
The node apparently homogenizes contrast and color balance inside the video.
It applies mean/std normalization when using I2V.
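Roughly, the idea is to match the mean and standard deviation of the starting latent frames (e.g. the encoded I2V image) to the rest of the video latent. A minimal sketch of that idea, assuming a [B, C, T, H, W] latent layout; the actual node may compute its statistics differently:

```python
import torch

def normalize_video_latent_start(latent: torch.Tensor, start_frames: int = 1) -> torch.Tensor:
    """Sketch: mean/std-normalize the first frames of a video latent against the rest.

    Assumes latent shape [B, C, T, H, W]; the real NormalizeVideoLatentStart node
    may use a different layout and reference frames.
    """
    start = latent[:, :, :start_frames]   # frames to be corrected
    rest = latent[:, :, start_frames:]    # frames used as reference

    # Per-channel statistics over the reference frames
    ref_mean = rest.mean(dim=(0, 2, 3, 4), keepdim=True)
    ref_std = rest.std(dim=(0, 2, 3, 4), keepdim=True)
    cur_mean = start.mean(dim=(0, 2, 3, 4), keepdim=True)
    cur_std = start.std(dim=(0, 2, 3, 4), keepdim=True)

    # Shift/scale the start frames so their statistics match the reference
    start_norm = (start - cur_mean) / (cur_std + 1e-6) * ref_std + ref_mean
    return torch.cat([start_norm, rest], dim=2)
```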

GH:WASasquatch/WAS_Extras contains, among other useful nodes, WASWanExposureStabilizer, which is intended for a similar purpose.
GH:AIWarper/ComfyUI-WarperNodes
kijai/ComfyUI-KJNodes contains Image Batch Extend With Overlap,
which can be used to merge the original video with an extension produced using I2V or VACE mask extension techniques.
Example of it being used in a LongCat wf: extend-with-overlap.
WanVideoBlender from GH:banodoco/steerable-motion is an alternative.
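The common idea behind these nodes is a cross-fade over the shared frames: the last N frames of the original clip and the first N frames of the extension cover the same moment and are blended with a ramp. A rough sketch of that, assuming [T, H, W, C] image batches and a linear blend curve (the actual nodes may differ):

```python
import torch

def extend_with_overlap(original: torch.Tensor, extension: torch.Tensor, overlap: int) -> torch.Tensor:
    """Merge two image batches [T, H, W, C] whose last/first `overlap` frames coincide.

    Uses a linear cross-fade; the real nodes may apply a different curve.
    """
    a = original[-overlap:]   # tail of the original clip
    b = extension[:overlap]   # head of the extension
    # Blend weights ramp from 1 (keep original) to 0 (keep extension)
    w = torch.linspace(1.0, 0.0, overlap).view(-1, 1, 1, 1)
    blended = a * w + b * (1.0 - w)
    return torch.cat([original[:-overlap], blended, extension[overlap:]], dim=0)
```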
See also the next section on Trent Nodes
TrentHunter82/TrentNodes contains a Cross Dissolve with Overlap node
as well as WanVace Keyframe Builder and other nodes for examining videos, taking last N frames, creating latent masks and VACE keyframing.
See also: Qwen Edit - VACE.
GH:phazei/ComfyUI-HunyuanVideo-Foley
| HF Space | safetensors |
|---|---|
| ComfyUI-HunyuanVideo-Foley | hunyuanvideo_foley_xl |
| ComfyUI-HunyuanVideo-Foley | synchformer_state_dict_fp16 |
| ComfyUI-HunyuanVideo-Foley | vae_128d_48k_fp16 |
Inside ComfyUI you could use Stable Audio or ACE… but to be honest, neither is that good.
Resize Image v2 from kijai/ComfyUI-WanVideoWrapper: its new total_pixels mode copies what WanVideo Image Resize To Closest from kijai/ComfyUI-WanVideoWrapper does, which is the original Wan logic.
Image Batch Extend With Overlap from kijai/ComfyUI-KJNodes to compose extensions created with VACE extend techniques.
Video Info from Kosinkadink/ComfyUI-VideoHelperSuite + Preview Any to debug dimension errors in ComfyUI, etc.
Ckinpdx, a passionate AI artist, has shared the GH:ckinpdx/ComfyUI-WanKeyframeBuilder repository,
which provides the Wan Keyframe Builder (Continuation) node.
This node was originally intended to prepare images and masks for VACE workflows.
When SVI 2.0 was released, the node was updated to facilitate workflows combining VACE keyframing, extensions, and SVI references.
The node has two distinct modes of operation: one when the images output is used and one when the svi_reference_only output is used.
The modes are toggled by a boolean switch on the node.
Sample wf.
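For context, VACE keyframing boils down to building a control image batch in which the known frames (start, end, intermediate keyframes) sit at their target indices and every other frame is a neutral placeholder, plus a mask telling the model which frames to generate. A simplified sketch of that preparation; the gray value, tensor layout, and mask convention are assumptions, not the node's exact behavior:

```python
import torch

def build_vace_keyframes(keyframes: dict[int, torch.Tensor], num_frames: int, height: int, width: int):
    """Place keyframe images [H, W, C] at given frame indices.

    Returns a control batch [T, H, W, C] and a mask [T, H, W] where 1 marks
    frames the model should generate and 0 marks frames to keep as given.
    (Mask convention assumed; check the actual VACE nodes.)
    """
    control = torch.full((num_frames, height, width, 3), 0.5)  # neutral gray placeholders
    mask = torch.ones((num_frames, height, width))
    for idx, img in keyframes.items():
        control[idx] = img
        mask[idx] = 0.0
    return control, mask

# Example: keep the first and last frames fixed, generate everything in between
# first = ..., last = ...  # [H, W, C] tensors in 0..1
# control, mask = build_vace_keyframes({0: first, 80: last}, num_frames=81, height=480, width=832)
```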

Use this node to split the audio between generation runs that produce the various parts of the video with HuMo.
Use Trim Audio Duration as shown to remove the duplicate part of the audio before re-assembling the video.
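Conceptually, each generation run gets its own slice of the soundtrack, and the overlapping seconds that get re-rendered in the next run are trimmed off before the clips are concatenated. A hedged sketch of that bookkeeping in plain Python; the chunk and overlap lengths are illustrative, not what the actual nodes compute:

```python
import torch

def split_audio_for_runs(audio: torch.Tensor, sample_rate: int, chunk_s: float, overlap_s: float):
    """Split a waveform [channels, samples] into per-run chunks that overlap by `overlap_s` seconds."""
    chunk = int(chunk_s * sample_rate)
    overlap = int(overlap_s * sample_rate)
    step = chunk - overlap
    return [audio[:, start:start + chunk] for start in range(0, audio.shape[1], step)]

def reassemble(chunks: list[torch.Tensor], sample_rate: int, overlap_s: float) -> torch.Tensor:
    """Drop the duplicated overlap from every chunk after the first, then concatenate."""
    overlap = int(overlap_s * sample_rate)
    trimmed = [chunks[0]] + [c[:, overlap:] for c in chunks[1:]]
    return torch.cat(trimmed, dim=1)
```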
Assemble separate images into a sequence

Disassemble a sequence into separate images

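In ComfyUI terms both operations are just batch-dimension manipulations on the IMAGE tensor ([T, H, W, C]); a trivial sketch of what the assemble/disassemble nodes amount to, with random tensors standing in for loaded images:

```python
import torch

# Assemble: stack individual images [H, W, C] into a sequence/batch [T, H, W, C]
frames = [torch.rand(480, 832, 3) for _ in range(4)]   # stand-ins for loaded images
sequence = torch.stack(frames, dim=0)

# Disassemble: split the sequence back into a list of single images
images = [sequence[i] for i in range(sequence.shape[0])]
```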
from GH:PGCRT/CRT-Nodes:

UniLumos is an AI model for relighting a video. Workflows:
SuperPrompt node from kijai/ComfyUI-KJNodes.
Merge Images node from VideoHelperSuite (the so-called VHS).
Moved here.
ViTPose can handle animals as well as humans.
dwpose