>>83903591
>>83906815
>Does HLL-anon still visit this place?
Yeah, I check these threads regularly.
> 1. Why did you stick to using a lyco and not a full unet finetune?
It's smaller and uses less memory than finetuning. It made sense for sd1.5, because a lyco without conv layers was about 1/3 the size of a full model. For SDXL I don't think it works that well.
>"true" finetuning is the only viable option
If you want to switch the model to vpred, it is. In your case you have no choice.
> 2. Did you try freezing the text encoder?
If you train the text encoder, the model will learn the names of characters and artists much faster. You can stop training the te after 5-10 epochs. You can also try training only te2 for sdxl, or using a higher lr for te1 and a lower lr for te2 (rough sketch below); it works better than the same lr for both.
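iirc kohya's sdxl trainer exposes separate te1/te2 lrs (check the script's --help), but if you're rolling your own loop the idea is just optimizer param groups. Plain pytorch sketch, module names and lr values are placeholders, not a real config:
```python
import torch

# Placeholder modules standing in for the real SDXL pieces (unet, CLIP-L te1,
# OpenCLIP-G te2) -- names and sizes here are just for illustration.
unet = torch.nn.Linear(8, 8)
te1 = torch.nn.Linear(8, 8)
te2 = torch.nn.Linear(8, 8)

# One optimizer, three param groups: example lrs, higher for te1, lower for te2.
optimizer = torch.optim.AdamW([
    {"params": unet.parameters(), "lr": 1e-5},
    {"params": te1.parameters(), "lr": 5e-6},
    {"params": te2.parameters(), "lr": 1e-6},
])

# "Stop training the te after 5-10 epochs" = zero out the te groups' lr
# (or flip requires_grad off) once you hit that epoch.
def freeze_text_encoders(opt, te_group_indices=(1, 2)):
    for i in te_group_indices:
        opt.param_groups[i]["lr"] = 0.0
```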
> 3. Why you didn't really try baking for SDXL
I'm not a big fan of pony (because of the marine=aua bullshit). I trained some small loras, but I don't want to do anything big. It takes too much time.
Random unsolicited advice:
- If you train with kohya scripts, check if you are using options like min-snr-gamma and ip-noise. They probably won't work correctly with vpred and will cause problems (see the weighting sketch after the link below)
- You'll save a lot of vram if you train in full bf16 with an optimizer that supports stochastic rounding or Kahan summation.
Try StableAdamW, maybe. It's pretty good. Note that it needs a slightly higher lr than usual (usage sketch below the link)
https://pytorch-optimizers.readthedocs.io/en/latest/optimizer/#pytorch_optimizer.StableAdamW
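On the min-snr-gamma point: the standard weight min(SNR, gamma)/SNR assumes eps-prediction; as I recall, the usual vpred adaptation divides by SNR+1 instead, and if your copy of the scripts doesn't do that the weighting is wrong for vpred. Sketch of just the weighting (not kohya's actual code), assuming you already have the per-timestep SNR:
```python
import torch

def min_snr_weight(snr: torch.Tensor, gamma: float, v_prediction: bool) -> torch.Tensor:
    """Per-timestep loss weight for min-SNR-gamma.

    eps-pred: min(SNR, gamma) / SNR
    v-pred:   min(SNR, gamma) / (SNR + 1)  <- the adaptation vpred needs
    """
    clipped = torch.clamp(snr, max=gamma)
    return clipped / (snr + 1.0) if v_prediction else clipped / snr
```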
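Minimal usage sketch, assuming pytorch-optimizer is installed; the lr is only an example, and check the linked docs for whatever bf16/Kahan options your version exposes:
```python
import torch
from pytorch_optimizer import StableAdamW  # pip install pytorch-optimizer

# Full-bf16 weights; tiny model just so the snippet runs.
model = torch.nn.Linear(8, 8).to(dtype=torch.bfloat16)

# StableAdamW is AdamW with per-parameter update clipping, which is why it
# tolerates (and wants) a somewhat higher lr than plain AdamW.
optimizer = StableAdamW(model.parameters(), lr=2e-5)
```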