>>88101198
>>88101979
That's really cute.
>blonde girl
She looks familiar, but I can't place her, sorry.
>vram heavy stuff
Yeah, looking forward to doing actual finetuning and high batch size loras. I have some ideas I wanna try.
>>88102325
Uuuuuu
Speaking of vram heavy stuff: images in a batch are processed in parallel, but it seems they all go through the same loaded copy of the model, so the gpu isn't fully saturated. Is there a way to force the gpu to load the model twice for inference, i.e. true parallelism, or would I have to open comfy a second time? Does that even work? I know about --highvram, but that just stops stuff from unloading.
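For what it's worth, a second instance does work as long as the vram fits both model copies: ComfyUI's real --port (and --cuda-device for multi-gpu setups) flags let two processes run side by side, each loading its own copy of the model. A rough sketch (the port numbers are just examples):

```shell
# first instance on the default port
python main.py --port 8188 &

# second instance on another port; same gpu, separate model copy,
# so both can run inference at the same time
python main.py --port 8189 &

# with two gpus you could instead pin each instance to its own card:
# python main.py --port 8189 --cuda-device 1
```

Then point two browser tabs at localhost:8188 and localhost:8189 and queue prompts in both. Whether this actually speeds things up on a single gpu depends on how much of the card one instance already saturates.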