>>103397694
>>103414961
KEK
>>103419557
Thicc YabaIRyS
>>103417505
Evening, cute kiss comics
>38gb
Yeah, I think this is the biggest image model we have in open source, though most people can't run the full one, so you want to use the GGUFs, which are around 12-14GB, plus another 4-5GB for the text encoder (it uses an LLM as the TE). The guy who made the quick Wan LoRAs also made one for this model, so overall I heard you can cut VRAM use down to about 6GB and 60-90 sec per gen. Yeah, all in Comfy unfortunately. I hate to say it (since I hate Comfy too), but I think Forge is probably dead for future models.
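Napkin math on those numbers, if anyone wants to sanity-check their card. This is just a sketch summing the ballpark sizes from the post (GGUF quant + LLM text encoder + a guessed ~1GB of working overhead); actual use depends on resolution, batch size, and offloading, and the function name here is made up:

```python
# Rough VRAM budget for the setup described above.
# All sizes are the ballpark figures from the post, in GB;
# overhead_gb is a guess, not a measured number.
def vram_estimate(model_gb: float, text_encoder_gb: float,
                  overhead_gb: float = 1.0) -> float:
    """Sum the major contributors to VRAM use."""
    return model_gb + text_encoder_gb + overhead_gb

# Midpoints: ~13GB GGUF quant + ~4.5GB text encoder
total = vram_estimate(13.0, 4.5)
print(f"~{total}GB without any offloading")  # well over a 6-8GB card,
# which is why the speed-up LoRA + offloading matters for low-VRAM gens
```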