Our APIs support up to 20s at 25fps (500 frames) at the moment. Depending on hardware, of course.
gemma_3_12B_it adapted by team Comfy
gemma-3-12b-it-Q4_K_S.gguf possibly not suitable?
gemma_3_12B_it_fp8_e4m3fn - The fp8-converted text encoder from Comfy, goes in the CLIP folder
ltx-2-19b-dev-fp4_projections_only - Projections extracted from the LTX-2 model to allow loading with the DualClipLoader node, goes in the CLIP folder
ltx-2-19b-dev-fp4_video_vae - The video VAE, can be loaded with the VaeLoader node, goes in the VAE folder
ltx-2-19b-dev-fp4_vocoder - The vocoder model, not useful separately currently
Lightricks/ComfyUI-LTXVideo
does the model go in the diffusion_models folder? no, in the checkpoints folder
what folder does the spatial upscaler go in?
models/latent_upscale_models
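Putting the folder notes above together, the layout looks roughly like this (a sketch only; exact filenames depend on which variants you downloaded):

```
ComfyUI/models/
├── checkpoints/              main LTX-2 model (not diffusion_models)
├── clip/                     gemma_3_12B_it_fp8_e4m3fn, ltx-2-19b-dev-fp4_projections_only
├── vae/                      ltx-2-19b-dev-fp4_video_vae
└── latent_upscale_models/    spatial upscaler
```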
From LTX developers:
If you are using the custom node (from the LTXVideo repository), you must download the entire model folder from Hugging Face: google/gemma-3-12b-it-qat-q4_0-unquantized. After downloading, place the folder in your models directory and configure the ComfyUI node to point to:
gemma-3-12b-it-qat-q4_0-unquantized/model-00001-of-00005.safetensors
Do not delete or move the other model-0000X-of-00005.safetensors files.
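If you'd rather script the download than grab files one by one, here is a minimal sketch using huggingface_hub (assuming it's installed; gemma-3 repos are gated, so you may need to log in with an HF token first). The local_dir path is an assumption, point it at your actual models directory:

```python
from huggingface_hub import snapshot_download

# Downloads the whole repo folder, including all five
# model-0000X-of-00005.safetensors shards, so the index can resolve them.
snapshot_download(
    repo_id="google/gemma-3-12b-it-qat-q4_0-unquantized",
    local_dir="ComfyUI/models/gemma-3-12b-it-qat-q4_0-unquantized",
)
```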
euler + simple or dpm++ 2m + karras are safe sampler/scheduler combos
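For reference, this is roughly where those settings live in a ComfyUI API-format workflow (a sketch; the node ids, steps, cfg, and seed are placeholders, not recommended values):

```python
# Hypothetical KSampler entry from an API-format workflow dict.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "sampler_name": "euler",   # or "dpmpp_2m"
        "scheduler": "simple",     # pair "dpmpp_2m" with "karras"
        "steps": 20,               # placeholder
        "cfg": 3.0,                # placeholder
        "seed": 0,
        "denoise": 1.0,
        "model": ["1", 0],         # placeholder node ids
        "positive": ["2", 0],
        "negative": ["3", 0],
        "latent_image": ["4", 0],
    },
}
```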
Using the full model with the distill LoRA at 0.6 makes skin look far more natural than using the distill model alone. Shame it’s so RAM-heavy.
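In node terms that's just a LoraLoader between the checkpoint loader and the sampler, with both strengths at 0.6. A sketch in the same API-workflow style; the LoRA filename and node ids are placeholders:

```python
# Hypothetical LoraLoader entry applying the distill LoRA at 0.6
# on top of the full model.
lora_loader = {
    "class_type": "LoraLoader",
    "inputs": {
        "lora_name": "ltx2_distill_lora.safetensors",  # placeholder filename
        "strength_model": 0.6,
        "strength_clip": 0.6,
        "model": ["checkpoint_node", 0],  # placeholder node id
        "clip": ["checkpoint_node", 1],
    },
}
```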