Alright, that took way longer than I would have liked, mostly because the extension has no cancel or interrupt to recover from errors, so I had to restart the webui about 3-4 times. Not to mention the extension troubleshooting.
Note that the links are on litterbox because catbox is giving me an error 500 server issue.
First, as a reference, this is the gif2gif pass I did at 0.3-0.4 denoising, iirc.
https://litter.catbox.moe/bhvixh.webm

Then we have TemporalKit, which takes a snapshot every X frames and outputs the frames into a batch folder. You batch-process that folder in img2img, then feed the results back into Temporalbatch. The extension then uses Y overlapping frames to stitch the images together and blends the remaining frames with reference to the input video.
These are the outputs:
Reference taken every X=2:
https://litter.catbox.moe/t9ep05.webm

Reference taken every X=5:
https://litter.catbox.moe/a0i8wn.webm

Reference taken every X=10:
https://litter.catbox.moe/dxgin7.webm

IMO the X=2 output captures the motion pretty well, and the higher X becomes, the more noise the blending done by temporalnet produces as it stitches the images together. Since I am trying to img2img the dancing character into La+, the img2img step produces a lot of inconsistent details. But if you are simply doing an IRL-video-to-anime-style conversion while maintaining the features of the main character, I think this would be pretty good. All in all, idk lmao, you guys conclude.
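To make the X tradeoff concrete, here is a rough sketch of the sampling math (not TemporalKit's actual code, just my mental model of the stride): at stride X, only every Xth frame actually goes through img2img, and everything else has to be reconstructed by blending.

```python
def keyframe_indices(total_frames: int, stride: int) -> list[int]:
    # Frames sampled into the batch folder and actually run through img2img.
    return list(range(0, total_frames, stride))

def blended_count(total_frames: int, stride: int) -> int:
    # Everything else is reconstructed by blending against the input video.
    return total_frames - len(keyframe_indices(total_frames, stride))

# For a hypothetical 5-second 24fps clip (120 frames):
for x in (2, 5, 10):
    keys = len(keyframe_indices(120, x))
    print(f"X={x}: {keys} keyframes diffused, {blended_count(120, x)} frames blended")
```

So at X=10 only 12 of 120 frames are actually diffused and 108 are synthesized by blending, which lines up with why the stitching noise gets so much worse at higher X.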
>>51560935
Morning Feeshanon! Lovely festive Feesh!
>>51561419
>Good luck!
Thanks, got some interesting results, as you can see.
>>51563714
Very nice cyberFeesh