How I Use Pivotal Tuning to Make Better LoRAs in Stable Diffusion

Pivotal tuning is a specialized training technique that trains embeddings and networks, such as the UNet and text encoder, simultaneously within the same framework. While this method is not widely used among casual LoRA trainers, it has been acknowledged in several publications [1,2,3] and has been successfully implemented in cloneofsimo’s LoRA repository. The benefits …
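To make the core idea concrete, here is a minimal toy sketch of that joint training loop in PyTorch: a learnable concept embedding and a LoRA adapter on a frozen layer are updated by a single optimizer with separate learning rates. All names, shapes, and learning rates here are illustrative assumptions, not the actual API of cloneofsimo’s repository or any diffusion framework.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical shapes for illustration only.
embed_dim, rank = 16, 4

# The "pivot": a learnable embedding for the new concept token.
concept_embedding = nn.Parameter(torch.randn(embed_dim) * 0.01)

# Stand-in for a pretrained network layer, kept frozen.
base = nn.Linear(embed_dim, embed_dim)
for p in base.parameters():
    p.requires_grad_(False)

# Trainable low-rank (LoRA) correction to the frozen layer.
lora_down = nn.Parameter(torch.randn(rank, embed_dim) * 0.01)
lora_up = nn.Parameter(torch.zeros(embed_dim, rank))  # zero-init: no effect at step 0

def forward(x):
    # Frozen base output plus the low-rank LoRA update.
    return base(x) + x @ lora_down.T @ lora_up.T

target = torch.randn(embed_dim)  # toy regression target

# The pivotal-tuning idea: ONE optimizer updates both the embedding and the
# network's LoRA weights in the same loop, with per-group learning rates.
opt = torch.optim.Adam([
    {"params": [concept_embedding], "lr": 1e-2},
    {"params": [lora_down, lora_up], "lr": 1e-3},
])

first_loss = None
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward(concept_embedding), target)
    if first_loss is None:
        first_loss = loss.item()
    loss.backward()
    opt.step()
final_loss = loss.item()
```

In a real setup the loss would be the diffusion denoising objective and the LoRA adapters would sit inside the UNet and text encoder attention layers, but the structure is the same: embedding and network parameters share one training loop instead of being tuned in separate stages.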
