>>3025291
Are you using ComfyUI or A1111/Forge/Reforge?
If ComfyUI, your best bet is looking up a working workflow and tweaking it from there. The ComfyUI site has good documentation and example workflows; one I found was
https://docs.comfy.org/tutorials/controlnet/controlnet
For A1111/Forge/Reforge it's easy; ControlNet just has a ton of configuration options, so it can seem more complex than it is. All you really need is:
1. The input image for ControlNet to learn something from
2. The model for what it should learn
3. Default settings are usually fine but you can tweak them depending on what you want
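If you go the ComfyUI route instead, those same three ingredients show up as nodes in the graph. A stripped-down fragment in ComfyUI's API (JSON) format might look roughly like this — the class types are stock ComfyUI nodes, but the file names are placeholders, node "0" would be your checkpoint loader, and you'd still wire the output conditioning into your usual KSampler setup:

```
{
  "1": {"class_type": "LoadImage",
        "inputs": {"image": "pose_reference.png"}},
  "2": {"class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "my_controlnet.safetensors"}},
  "3": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "your prompt here", "clip": ["0", 1]}},
  "4": {"class_type": "ControlNetApply",
        "inputs": {"conditioning": ["3", 0],
                   "control_net": ["2", 0],
                   "image": ["1", 0],
                   "strength": 1.0}}
}
```

The `strength` value there is the same knob as the weight slider in A1111: raise it if the pose isn't sticking, lower it if the output looks stiff.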
A common use case for ControlNet is copying poses. I've found that the Canny setting with this model works well for me:
https://huggingface.co/2vXpSwA7/iroiro-lora/blob/main/test_controlnet2/CN-anytest_v4-marged_pn_dim256.safetensors
Again, I usually use the default settings, but you can switch it to "Prefer ControlNet" if it's not following the pose as strongly as you want. I also run the ControlNet preprocessor before genning and try to keep the gen resolution the same as the input image (A1111/Forge/Reforge have a button that copies the input image's resolution into the gen settings).
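For intuition about what the Canny preprocessor hands to ControlNet: it reduces your input image to a black-and-white edge map, and that edge map is what the model actually follows. Here's a toy version of the idea in plain numpy — just gradient magnitude plus a threshold; real Canny also adds Gaussian smoothing, non-maximum suppression, and hysteresis, so treat this as a sketch, not the actual preprocessor:

```python
import numpy as np

def edge_map(img, threshold=0.2):
    """Simplified edge detector: gradient magnitude + threshold.

    img: 2-D grayscale array with values in [0, 1].
    Returns a binary (0/1) edge map of the same shape.
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)          # edge strength per pixel
    return (magnitude > threshold).astype(np.uint8)

# Synthetic input: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_map(img)
# Edges appear only along the vertical boundary (columns 3-4).
```

This is also why input resolution matters: the edge map is per-pixel, so genning at a different resolution than the reference stretches or squashes the edges ControlNet is trying to follow.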
Hope that helps!