>>50610236
It's really nothing special. Some tips that, in my experience, help squeeze out more detail:
- Use a model which comes with lots of detail on its own (this one is breakDomain). Maybe do some shopping on CivitAI for models to merge together with hll3/4/S2
- Some prompts seem to be able to modulate detail, but not by a lot
- Use LoRAs which are designed to add detail (I know of 2 so far, both on CivitAI)
- Use Dynamic Thresholding with a high CFG (30-50). This will make some samplers impossible to use; in that case stick to DPM++ 2M Karras
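For the curious: the core trick behind the Dynamic Thresholding extension (borrowed from the Imagen paper) is roughly the following. This is my own minimal numpy sketch of the idea, not the extension's actual code:

```python
import numpy as np

def dynamic_threshold(x0, percentile=0.995):
    """Clamp each predicted sample to its own high percentile of |values|,
    then rescale back into range. This is what lets a huge CFG (30-50)
    run without the colors blowing out."""
    flat = np.abs(x0).reshape(x0.shape[0], -1)
    s = np.quantile(flat, percentile, axis=1)    # per-sample threshold
    s = np.maximum(s, 1.0)                       # never clamp below the normal [-1, 1] range
    s = s.reshape(-1, *([1] * (x0.ndim - 1)))    # make it broadcast over the sample dims
    return np.clip(x0, -s, s) / s
```

The real extension layers mimic-CFG and scheduling knobs on top; the sketch only shows why high CFG stops frying the image.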
- Use Latent in hires.fix. If arms/hands are severely borked, rerun the same seed with an ESRGAN upscaler (e.g. AnimeSharp) at lower denoise and layer the two results in Photoshop to repair the hands
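The Photoshop layering step is just a masked composite: paint a mask over the broken hands and take those pixels from the ESRGAN pass. The same operation in numpy, as an illustration (Photoshop does this with layer masks):

```python
import numpy as np

def layer_repair(latent_pass, esrgan_pass, mask):
    """Take pixels from esrgan_pass where mask is 1 (the broken hands),
    keep the more detailed latent_pass everywhere else.
    Both images are HxWx3 float arrays; mask is HxW in [0, 1]."""
    m = mask.astype(np.float64)[..., None]
    return latent_pass * (1.0 - m) + esrgan_pass * m
```

A soft (blurred) mask gives a smooth transition between the two passes instead of a hard edge.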
- Upscale the image in the i2i tab. I currently use Ultimate SD Upscale but vanilla SD Upscale also works. The higher the upscale, the more detail
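Both upscale scripts work the same way under the hood: chop the image into overlapping tiles, run img2img on each, and stitch them back together. The tiling itself is just this kind of arithmetic (my own sketch of the idea, with hypothetical tile/overlap defaults):

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (left, top, right, bottom) boxes covering a width x height
    image with tiles that overlap, so the seams can be blended away."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes
```

A 1600x1200 target comes out to 12 tiles at 512px with 64px overlap, which is also why higher upscales take so much longer: each tile is its own i2i pass.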
- Alternatively use MultiDiffusion/Tiled Diffusion with Tiled VAE active. I found Tiled Diffusion a bit more finicky, but it returns better results when it works without corruption
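The difference with MultiDiffusion is that overlapping tiles get averaged together during denoising rather than only blended at the end. The merge step is essentially a weighted average of the patches, something like this simplified sketch of the idea:

```python
import numpy as np

def merge_tiles(height, width, channels, tiles):
    """tiles: list of ((left, top), patch) arrays placed on a canvas.
    Overlapping regions are averaged, which is what hides the seams."""
    acc = np.zeros((height, width, channels))
    weight = np.zeros((height, width, 1))
    for (left, top), patch in tiles:
        h, w = patch.shape[:2]
        acc[top:top + h, left:left + w] += patch
        weight[top:top + h, left:left + w] += 1.0
    return acc / np.maximum(weight, 1e-8)  # avoid dividing by zero in uncovered spots
```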
- Upscaling with DDIM adds detail; the more steps and denoise, the more detail. Of course, the image will also corrupt more easily
- If you have the lifetime available, use LDSR; otherwise Valar and foolhardy_Remacri are known to add good initial detail
- Keep ControlNet with the Tile model active to avoid corruption. An active ControlNet will slow down generation, however. ControlNet+Tile only seems to work in Tiled Diffusion and Ultimate SD Upscale, not in vanilla SD Upscale. Play around with the ControlNet settings: a higher weight reduces the risk of corruption, but also the amount of generated detail.
- Layer together different upscales in Photoshop to get rid of corruption. Usually I also adjust colors and exposure in the final pass
*by corruption I mean when faces/bodies etc. suddenly pop up in places where they don't belong*