
Video to Video with Hunyuan
Transform a video into a slightly or dramatically different one while staying true to the movement and composition of the original.
Video2Video_Hunyuan.json
Key Inputs
Load Video: Use any MP4 video that you would like to transform.
Frame load cap: The number of frames to load for the output video. For Hunyuan, this must be a multiple of 4 plus 1 (for example, 65 or 101); see the sketch after this list.
Skip first frames: Skip this many frames at the start of the source video, so processing begins further into its timeline.
Select every nth frame: Only process every Nth frame of the source video (for example, 2 processes every other frame).
Width & height: The output resolution, in pixels.
Prompt: Describe the desired output as precisely as possible.
Guidance Scale: Higher values follow the prompt more strictly.
Flow Shift: Controls temporal consistency; adjust it to tweak the smoothness of motion.
Denoise Strength: How much the output deviates from the source video; higher values allow more variation.
File Format: The output format, such as H.264, among other options.
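
Because the frame load cap must satisfy the multiple-of-4-plus-1 rule, it can help to work out a valid value before queuing the workflow. The following minimal Python sketch is not part of the workflow or of ComfyUI; the function names are illustrative. It rounds a desired frame count to the nearest valid Hunyuan value and estimates how many source frames are consumed given the skip and nth-frame settings.

```python
# Minimal sketch (not part of the workflow): helper math for the
# "Frame load cap", "Skip first frames", and "Select every nth frame" inputs.
# Function names are illustrative, not ComfyUI APIs.

def nearest_valid_frame_cap(desired_frames: int) -> int:
    """Round a desired frame count to the nearest valid Hunyuan value (4*n + 1)."""
    n = max(0, round((desired_frames - 1) / 4))
    return 4 * n + 1

def source_frames_needed(frame_cap: int, skip_first: int = 0, every_nth: int = 1) -> int:
    """Estimate how many source frames the loader reads to fill the cap."""
    return skip_first + (frame_cap - 1) * every_nth + 1

if __name__ == "__main__":
    cap = nearest_valid_frame_cap(100)                             # -> 101
    print(cap)
    print(source_frames_needed(cap, skip_first=30, every_nth=2))   # -> 231
```

For example, a target of roughly 100 frames rounds to a cap of 101, and with 30 skipped frames and every 2nd frame selected, the source clip needs to be at least about 231 frames long.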
Output Path
../outputs/Video2Video_Hunyuan/

Examples
Verified to work on ThinkDiffusion Build: Mar 11, 2025
Why do we specify the build date? ComfyUI and the custom nodes used in this workflow may be updated after this date, which can change the behavior or outputs of the workflow.