>>55117536
I mean, Perfusion is supposed to be even better at only ~100 KB per concept. From what I have read, Perfusion models, which are seemingly the next big thing, can learn a concept from an extremely small dataset and then transfer that concept not only to the vanilla model it was trained on but to fine-tunes as well.
And even with such a small dataset, it can apply the concept creatively without dragging in unnecessary elements from the training images.
>Perfusion can combine learned concepts at inference time, creating scenes which portray multiple concepts side-by-side, or even create interactions between them.
This is also huge, because in theory you could prompt two characters and have them BOTH appear side by side without ControlNet.
https://research.nvidia.com/labs/par/Perfusion/