>>87422489
As this is only my second inference test, I don't have a good answer. This is only 13 videos, but the way we train on low VRAM like a 4090 means the optimizations have already been happening. I could perhaps run smaller-frame videos, but that remains to be seen. Furthermore, the CogVideoX repo seems to mention looking into possibly giving us fp8 training, which would be a huge boon to finetuning.
Though I don't think this is the kind of loss I want to see.
>results2/cogvideox-lora/pytorch_lora_weights.safetensors
it's time