
LoRA Alchemy


Trivia

LoRAs often work with models they were not designed for.

People sometimes experience a placebo effect: they use LoRAs that neither enhance nor hinder the result.

If several LoRAs are used together, the order of application does not matter.
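For the usual additive application this follows from plain arithmetic: each LoRA contributes a weight delta that is added onto the base weight, and addition commutes. A minimal sketch, with illustrative names and shapes rather than real ComfyUI code:

```python
# Illustrative sketch (not ComfyUI code): why merge order does not matter
# for the usual additive LoRA application. Each LoRA contributes a delta
# alpha * (B @ A) that is added onto the base weight, and addition commutes.
import torch

def lora_delta(rank: int, out_dim: int, in_dim: int, alpha: float) -> torch.Tensor:
    """Random stand-in for one LoRA's weight delta: alpha * B @ A."""
    B = torch.randn(out_dim, rank)
    A = torch.randn(rank, in_dim)
    return alpha * (B @ A)

base = torch.randn(64, 64)                      # stand-in for a base weight matrix
d1 = lora_delta(rank=8,  out_dim=64, in_dim=64, alpha=0.8)
d2 = lora_delta(rank=16, out_dim=64, in_dim=64, alpha=1.0)

print(torch.allclose(base + d1 + d2, base + d2 + d1))  # True: order is irrelevant
```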

Some LoRAs Have To Be Loaded By The Model Loader

Re the UniAnimate LoRA: “you have to load the unianimate lora on the model loader, not the set node because it patches the model too”

Kijai on Merged vs Unmerged LoRAs

Normal ComfyUI behaviour with LoRAs is to merge the weights into the model before inference. That cannot be done with GGUF models, as it would be too slow an operation, so instead the LoRA weights are simply added to the dequantized GGUF weight whenever it is used.

Merging LoRAs:

Unmerged LoRAs (GGUF, or the option in the wrapper):

Q: Is it possible to merge a LoRA into a GGUF model?
A: You would need to do that with a bf16 or fp16 model and then convert that to GGUF.
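A rough sketch of the two paths described above. The function names here are hypothetical stand-ins, not the actual ComfyUI, WanVideoWrapper, or GGUF loader code; the point is only where the LoRA delta gets added.

```python
# Illustrative sketch only; names are hypothetical, not real loader code.
import torch

def lora_delta(A: torch.Tensor, B: torch.Tensor, scale: float) -> torch.Tensor:
    """LoRA weight delta: scale * B @ A."""
    return scale * (B @ A)

# --- merged path (normal ComfyUI behaviour for fp16/bf16 weights) ---
def merge_lora(weight: torch.Tensor, A, B, scale) -> torch.Tensor:
    # The delta is baked into the stored weight once, before inference.
    return weight + lora_delta(A, B, scale)

# --- unmerged path (GGUF, or the "unmerged" option in the wrapper) ---
def dequantize(q_weight) -> torch.Tensor:
    # Stand-in for GGUF dequantization; real code decodes quantized blocks.
    return q_weight.float()

def forward_with_unmerged_lora(x, q_weight, A, B, scale) -> torch.Tensor:
    # The quantized weight stays untouched; the LoRA delta is added to the
    # dequantized tensor each time the layer is actually used.
    w = dequantize(q_weight) + lora_delta(A, B, scale)
    return x @ w.t()

x = torch.randn(2, 64)
W = torch.randn(64, 64)
A, B = torch.randn(8, 64), torch.randn(64, 8)
y_merged   = x @ merge_lora(W, A, B, 1.0).t()
y_unmerged = forward_with_unmerged_lora(x, W, A, B, 1.0)
print(torch.allclose(y_merged, y_unmerged))  # True with this lossless fake dequantize
```

With a lossless stand-in for dequantization the two paths give the same output; the practical difference is that the merged path pays the cost once before inference, while the unmerged path adds the delta on every use but leaves the quantized weights untouched.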

Some Interesting Words

LoRA = low-rank matrix factorisation
LoHa = Hadamard product matrix factorisation
LoKr = Kronecker product matrix decomposition
(in bongmath language)
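A small sketch of what each of these factorisations means for the learned weight delta; the shapes, ranks, and variable names below are purely illustrative:

```python
# Illustrative construction of the weight delta each method learns.
# Shapes, ranks and variable names are examples only.
import torch

out_dim, in_dim, rank = 64, 64, 8

# LoRA: low-rank factorisation, delta = B @ A
A = torch.randn(rank, in_dim)
B = torch.randn(out_dim, rank)
delta_lora = B @ A                                  # (64, 64)

# LoHa: Hadamard (element-wise) product of two low-rank factorisations
B1, A1 = torch.randn(out_dim, rank), torch.randn(rank, in_dim)
B2, A2 = torch.randn(out_dim, rank), torch.randn(rank, in_dim)
delta_loha = (B1 @ A1) * (B2 @ A2)                  # (64, 64)

# LoKr: Kronecker product decomposition of the delta
W1 = torch.randn(8, 8)
W2 = torch.randn(out_dim // 8, in_dim // 8)
delta_lokr = torch.kron(W1, W2)                     # (64, 64)
```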

Special Use

Wan 2.1 and 2.2 LoRAs: