>>72749747
>Picrel: Me when I read about AI model quantums
https://files.catbox.moe/j8v6ao.png
I already made a successful merge with character knowledge and style transfer. My worry wasn't really style but character knowledge and lora compatibility, because the thing I hated about the whole ani/pony debacle was that loras trained on one did not work on the other.
IN04 and IN05 were affecting the merges the most.
>>72722169
OUT01, OUT02, OUT03 are related to character and style knowledge.
OUT07(?) or OUT08 had the opposite response to what IN04 and IN05 had on pony: if they leaned too much towards animagine, instead of just outputting noise, it started adding a bunch of shit to the image. And OUT06 or OUT07 were affecting the thickness of the lineart/contrast.
And OUT05 was porn-related, I think.
I don't have an X/Y plot, but I have a bunch of images from when I was testing earlier; they're not tagged appropriately though.
So from the looks of it, the inner transformer layers had more effect on the overall image and composition than the outer ones, except for Base and Out, of course.
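For anyone wanting to poke at the same blocks: a minimal sketch of the per-block (MBW-style) merge I'm describing. The key-to-block mapping follows the usual `model.diffusion_model.{input_blocks,middle_block,output_blocks}` layout of SDXL checkpoints; the plain lists stand in for real tensors (with torch you'd lerp the actual weights), and any example ratios are placeholders, not my actual recipe.

```python
import re

def key_to_block(key: str) -> str:
    """Map a UNet state-dict key to an MBW block label (BASE/INxx/M00/OUTxx)."""
    m = re.match(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return f"IN{int(m.group(1)):02d}"
    if key.startswith("model.diffusion_model.middle_block."):
        return "M00"
    m = re.match(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return f"OUT{int(m.group(1)):02d}"
    return "BASE"  # time embed, label embed, out conv, everything else

def block_merge(sd_a, sd_b, ratios, default=0.5):
    """merged = (1 - w) * A + w * B, with w picked per block from `ratios`."""
    merged = {}
    for key, a in sd_a.items():
        w = ratios.get(key_to_block(key), default)
        b = sd_b[key]
        merged[key] = [(1 - w) * x + w * y for x, y in zip(a, b)]
    return merged
```

So e.g. `ratios = {"IN04": 0.8, "IN05": 0.8}` pushes the blocks that mattered most toward model B while everything else stays at the default mix.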
I think this is the XL list of blocks and how many layers each one has based on my info output from a model:
https://files.catbox.moe/tp0u1d.txt
I will probably upload the model in a bit but it has 1 big stinky issue. From my limited testing it sucks at 'foot focus' sometimes.
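If you want to regenerate that block/layer list from any checkpoint yourself, a rough sketch: group the UNet state-dict keys by block and count tensors per block. In practice you'd pull the keys from a real file (e.g. `safe_open(...).keys()` from the safetensors lib); the hardcoded keys below just stand in for that.

```python
from collections import Counter

def count_block_tensors(keys):
    """Count state-dict tensors per UNet block (input/middle/output)."""
    counts = Counter()
    for k in keys:
        parts = k.split(".")
        if len(parts) > 3 and parts[1] == "diffusion_model":
            if parts[2] in ("input_blocks", "output_blocks"):
                counts[f"{parts[2]}.{parts[3]}"] += 1  # per-index block
            else:
                counts[parts[2]] += 1  # middle_block, out, time_embed, ...
    return counts

# Stand-in keys; a real SDXL checkpoint has ~1700 UNet tensors.
keys = [
    "model.diffusion_model.input_blocks.0.0.weight",
    "model.diffusion_model.input_blocks.0.0.bias",
    "model.diffusion_model.input_blocks.4.1.proj_in.weight",
    "model.diffusion_model.middle_block.0.in_layers.0.weight",
    "model.diffusion_model.output_blocks.5.0.weight",
]
counts = count_block_tensors(keys)
```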
>>72743970
https://files.catbox.moe/3avz13.png