>>77215195
Oh, maybe I worded that badly; it's not the training method per se. Each model is being finetuned differently, so they end up incompatible with each other. For example, picrel is Jelly with an Animagine LoRA used on a Kohaku finetune known as the "Heart of Apple". It doesn't really work. If the Cascade finetune going on right now ends up being good, it will be like starting from zero again, same as when SDXL arrived. Weird time we live in right now.
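Rough intuition for why a LoRA drifts across finetunes: a LoRA is a low-rank weight delta learned against one specific base, so adding the same delta to a different finetune's weights lands somewhere the LoRA was never trained for. A toy numpy sketch (all weights here are random stand-ins, not real model tensors):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy layer size and LoRA rank

# Two hypothetical finetunes that drifted apart from a shared base model.
W_base = rng.normal(size=(d, d))
W_animagine = W_base + 0.5 * rng.normal(size=(d, d))
W_kohaku = W_base + 0.5 * rng.normal(size=(d, d))

# A LoRA is a low-rank delta: W' = W + alpha * (B @ A), trained on ONE base.
A = rng.normal(size=(r, d))
B = rng.normal(size=(d, r))
alpha = 1.0
delta = alpha * (B @ A)

# The delta only "means" something relative to the weights it was trained
# against; applied to a different finetune it produces a different network.
on_animagine = W_animagine + delta
on_kohaku = W_kohaku + delta
mismatch = np.linalg.norm(on_animagine - on_kohaku)
print(f"weight mismatch between the two targets: {mismatch:.2f}")
```

The mismatch between the two merged results is exactly the gap between the finetunes themselves, which is why the further two finetunes diverge, the worse their LoRAs cross over.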