>>60700161
OFT is mostly about less overfitting and better preservation of the base model. And it supposedly trains faster too.
Pivotal tuning is training the embeddings and the model at the same time. Some anon was talking about it a few days ago.
Sounds stupid, but there are examples of it working:
https://civitai.com/articles/2494/making-better-loras-with-pivotal-tuning
HLL with this method would have about 800 embeddings packed in. Not sure if it will work at that scale, but it would be interesting to try.
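Rough idea of what the "at the same time" part means, as a PyTorch-flavored sketch (names and shapes are made up, this is not the script from the article): the new token embeddings and the tuned model weights sit in one optimizer, so both get gradients every step instead of training the embedding first and the LoRA after.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 49408, 768
num_new_tokens = 4  # e.g. one learned token per concept

# frozen base embedding table; only the new rows are trainable
base_embeds = nn.Embedding(vocab_size, embed_dim)
base_embeds.weight.requires_grad_(False)
new_embeds = nn.Parameter(torch.randn(num_new_tokens, embed_dim) * 0.01)

# stand-in for the LoRA-wrapped model being tuned alongside
lora_layer = nn.Linear(embed_dim, embed_dim)

# one optimizer over BOTH parameter groups -- that's the pivotal part
opt = torch.optim.AdamW(
    [{"params": [new_embeds], "lr": 5e-4},
     {"params": lora_layer.parameters(), "lr": 1e-4}]
)

for step in range(100):
    prompt_ids = torch.randint(0, vocab_size, (8, 6))   # dummy prompt tokens
    concept_ids = torch.randint(0, num_new_tokens, (8,))
    cond = torch.cat(
        [base_embeds(prompt_ids),                # frozen text embeddings
         new_embeds[concept_ids].unsqueeze(1)],  # trainable concept token
        dim=1,
    ).mean(dim=1)
    pred = lora_layer(cond)
    loss = pred.pow(2).mean()  # placeholder for the real diffusion loss
    opt.zero_grad()
    loss.backward()            # gradients reach embeddings AND model weights
    opt.step()
```

With 800 concepts you'd just be growing `num_new_tokens`, so the open question is whether that many jointly-trained embeddings stay stable.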