HLL3 is training, 6 epochs done so far.
Also, I saw this interesting post
>>41209490 from Mogu-anon and tried extracting LoRAs from different versions of the hll models.
hll1-last, hll2-e18b - more epochs without --train_text_encoder
hll3-e6 - current version, still training
link - https://mega.nz/folder/rBVGjIpQ#WCl9DL65mMRyaBmyIPA9WQ
Settings I used for extraction: --save_precision fp16 --dim 264
Results from using the LoRA are noticeably worse than results from using the model itself, but it works
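For anyone who wants to try the same thing: those flags match kohya's sd-scripts extraction script, so the full command would look roughly like this (model filenames and the base model are placeholders, not from my setup):

```shell
# Sketch: extract a LoRA as the diff between a base model and a finetune,
# using kohya sd-scripts (networks/extract_lora_from_models.py).
# Paths and model names below are assumptions, substitute your own.
python networks/extract_lora_from_models.py \
  --model_org base-model.ckpt \
  --model_tuned hll3-e6.ckpt \
  --save_to hll3-e6-lora.safetensors \
  --save_precision fp16 \
  --dim 264
```

--model_org should be whatever base the hll finetune started from, since the script extracts the difference between the two checkpoints.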