This LoRA only works properly at most 5% of the time, but whenever it does it's so satisfying. I wonder if I could improve it by throwing all the failed images into the regularization folder and training another one. Has anyone here used regularization images while training before?
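For reference, here's a rough sketch of how I'd dump the rejects into a reg folder, assuming the kohya-ss sd-scripts conventions (the "<repeats>_<class>" subfolder naming and the --reg_data_dir flag are my assumptions about that toolkit, so double-check the docs). The paths and class name are made up:

# Sketch: copy rejected outputs into a regularization folder laid out the way
# kohya-ss sd-scripts is assumed to expect ("<repeats>_<class>" subfolder naming).
from pathlib import Path
import shutil

failed_dir = Path("outputs/failed")   # hypothetical folder of bad generations
reg_dir = Path("reg/1_person")        # 1 repeat, class "person" (assumed class token)
reg_dir.mkdir(parents=True, exist_ok=True)

for img in failed_dir.glob("*.png"):
    # copy rather than move so the originals stay put
    shutil.copy2(img, reg_dir / img.name)

# Then point the trainer at the parent folder, e.g. (assuming kohya-ss sd-scripts):
#   accelerate launch train_network.py ... --reg_data_dir reg

No idea yet if feeding it failures actually helps or just teaches it the failure look, so treat this as an experiment, not a recipe.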