>>88329117
I did bake an Aradia Lora yesterday, but I finished the Ravencroft dataset while doing so and decided to bake the whole family overnight.
Yeah, this is much better than Aradia, and it feels like it might even be better than the mamarissa one. It's definitely less pose-constrained. Pretty nice to see that this approach works so well. It also cuts down on training time compared to multiple separate Loras and makes sorting the data easier.
I have three theories as to why it works so much better:
1: Larger dataset (400 images, including some Nerissa and other chuuba group images with her so she doesn't get fried). Including other characters, and multi-character images where the model already knows both characters, might be letting it learn more angles and generalize better. I assume it's mostly this, because I think this is also what made Illustrious so good at character comprehension.
2: Higher batch size. I have read that higher batches supposedly make Loras worse, but I feel like they might only slow down learning, since both the Justice and now the Ravencroft Lora kept improving until step 4000 without frying (quick numbers on that after the list).
1 and 2 would also probably amplify each other, with higher image counts allowing batches to be made up of more combinations of images.
3: Higher dims. Not really sure, since I'm not too well versed in the actual tech behind it (I still have those Microsoft papers on general ML stuff I wanna read), but I'd assume higher dims come closer to a full finetune than lower dims. Maybe someone with more knowledge could tell me how accurate this analogy is: low-rank Loras are like cookie molds, while higher ranks and finetunes teach the model how to do it by hand. The first is quick and gets good results, but the second allows more control over the end result (rough sketch of what I mean below).
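On point 2, some quick made-up numbers (not my actual config, just to show the scaling): at a bigger batch, each optimizer step averages the gradient over more images, so the same step count covers more of the dataset and the learning per image is smoother and slower.

```python
# Toy arithmetic only; the dataset size and batch values here are hypothetical.
dataset_size = 400
steps = 4000
for batch in (1, 4, 8):
    updates_per_epoch = dataset_size / batch   # optimizer steps per pass over the data
    images_seen = steps * batch                # total image views by step 4000
    epochs = images_seen / dataset_size
    print(f"batch {batch}: {updates_per_epoch:.0f} updates/epoch, "
          f"step {steps} = {epochs:.0f} epochs")
# batch 1: 400 updates/epoch, step 4000 = 10 epochs
# batch 8:  50 updates/epoch, step 4000 = 80 epochs
# Each update averages over the whole batch, so bigger batches give smoother,
# effectively smaller per-image updates, which fits "slower to fry" rather than "worse".
```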
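And on point 3, a toy sketch of what a Lora actually trains, assuming a plain linear layer for simplicity (real SD Loras wrap the attention/conv weights; this isn't anyone's actual training code). I think the cookie mold analogy holds up: the rank caps how much of the weight matrix the Lora can reshape, and raising the dim moves it toward the "by hand" end.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank delta B @ A."""
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # pretrained W (and bias) stay frozen
        # Only these two skinny matrices get trained; their product is the update to W.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Trainable params: rank * (in_features + out_features), vs in_features * out_features
# for a full finetune. A 768x768 layer at rank 8 trains ~12k weights instead of ~590k,
# so raising the dim/rank gives the Lora more room to reshape the layer, at the cost
# of a chonkier file and more VRAM.

layer = LoRALinear(nn.Linear(768, 768), rank=32)  # usage example with made-up sizes
```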
Anyway, here's the Lora (it's as chonky as the Justice one):
https://gofile.io/d/agqzFT (litterbox errored out)
Picrel comparison and some other seeds and model comparisons:
https://litter.catbox.moe/24rmsx.png
https://litter.catbox.moe/npau07.png
https://litter.catbox.moe/ms0o21.png
https://litter.catbox.moe/2v912v.png
Demons trying to get seed:
https://litter.catbox.moe/59iuys.png
It didn't learn Paparissa or Malphis, which might be because they don't even have a quarter of the images that the women have, and most of those are group pictures, so they don't really count anyway.
PS: Someone on civit was joking that I'm doing the next hll, lora by lora. Little do all of you know I'm still just in the testing and data collection phase for the eventual finetune.
>>88362595
>Is the NAI killer finally here?
In terms of local being ahead for a while, I'd say yes. NAI still has its place for people without GPUs, the director tools are really good from what I've seen, and it still has a nicer baseline, but Noob is now close enough in quality, and has the advantage of better multi-character prompts and access to all local tooling, that I'd actually consider it superior for most use cases. Of course, I did say the same thing about 4th tail, and that became a meme on /h/ (rightfully so, I was just blinded by it being better than Pony at the stuff I wanted). It's a bit of an unfair comparison though, as NAI is over half a year old at this point.
v4 will either blow local out of the water again or be DOA. No real in-between atp.
>>88364176
That's a really nice style. Could I ask for the artists/loras?
>>88368608
Cute!
>>88368802
If Nvidia, I think batch size 1 takes around 8 gigs; it's probably possible to cache even more, use smaller optimizers, and save another 2. If AMD, anything under 16 gigs will probably OOM with most optimizers, considering my 6800 XT OOMed sometimes. If 8-bit optimizers run properly nowadays, those might make it possible on 12.
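For reference, swapping in an 8-bit optimizer via bitsandbytes looks roughly like this (generic sketch, not tied to any particular trainer; the kohya scripts expose the same thing through an optimizer_type setting, if I remember right):

```python
import torch.nn as nn
import bitsandbytes as bnb

lora_module = nn.Linear(768, 768)   # stand-in for the Lora's trainable weights
optimizer = bnb.optim.AdamW8bit(
    lora_module.parameters(),
    lr=1e-4,
    weight_decay=0.01,
)
# AdamW keeps two extra state tensors per trained weight (the moments); storing
# those in 8-bit instead of fp32 is where the optimizer-side VRAM saving comes from.
```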
>>88376301
Cute goobs!