>>82546757
>What the hell is the point of the model if it's going to regurgitate AI gens into itself?
At least with AOM, if properly tagged, you could use it in negatives to get flatter images.
The part about forgetting previous styles comes down to how these models are trained. When a lora is trained and used on top of a model, or when a model is finetuned, nothing is added on top of the existing knowledge: no new neurons or parameters are created, the existing ones are changed, which is why old styles get overwritten.
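To make the point concrete, here's a minimal sketch of how a LoRA merge works in principle: the low-rank update W' = W + B·A is added into the existing weight matrix, so the parameter count stays the same and the old values are literally overwritten. The numbers and shapes are made up for illustration.

```python
# Hedged sketch: merging a LoRA delta rewrites the existing weight
# matrix in place; no new neurons, same parameter count afterward.

def matmul(X, Y):
    # tiny pure-Python matrix multiply for the example
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

W = [[1.0, 0.0],
     [0.0, 1.0]]       # base model weights (2x2 = 4 params)

B = [[0.5], [0.0]]     # hypothetical low-rank LoRA factors, rank 1
A = [[0.0, 1.0]]

delta = matmul(B, A)   # full-size update from the low-rank pair
W_merged = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

print(W_merged)                           # [[1.0, 0.5], [0.0, 1.0]]
print(len(W_merged) * len(W_merged[0]))   # still 4 params, none added
```

The base weights that encoded the old styles are gone after the merge; that's the forgetting.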
Maybe with Liquid Neural Networks we could have something that works this way, but they are far in the future, if they even work for imagegen. (Tried to train one myself these days, but I'm definitely screwing something up, as I cannot recover my own image from the latent space and only get smoke vaguely shaped like the input image instead:
https://files.catbox.moe/pcbq0j.png )
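For what it's worth, the usual first sanity check when decoded outputs come out as smoke is to overfit on a single sample and confirm the reconstruction error actually drops; if it doesn't, the training loop itself is broken before the architecture is even in question. A hedged toy sketch (scalar "autoencoder", all values hypothetical):

```python
# Sanity-check sketch: overfit a one-weight encoder/decoder pair on ONE
# input and verify decode(encode(x)) converges back to x.

def train_scalar_autoencoder(x, steps=200, lr=0.05):
    w_enc, w_dec = 0.1, 0.1          # scalar encoder/decoder weights
    for _ in range(steps):
        z = w_enc * x                # encode
        x_hat = w_dec * z            # decode
        err = x_hat - x
        # gradients of 0.5 * err^2 w.r.t. each weight
        w_dec -= lr * err * z
        w_enc -= lr * err * w_dec * x
    return abs(w_dec * w_enc * x - x)

residual = train_scalar_autoencoder(1.0)
print(residual)  # should be near zero; if not, the loop is broken
```

If even this kind of single-sample overfit fails on the real model, the bug is in the optimization setup (loss wiring, gradients, normalization), not in the latent space itself.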
I did comment on your posts. See
>>82483723
Hey, as long as you get what you want, I'm happy. Some other models might do it better. LS(D) anon makes some banger merges and tunes. Never give up testing if time allows it.