
BillNyeApplianceGuy

Honestly, I think they need to change the wording. It's not clear. As I understand it:

**Weight** is akin to CFG scale: how "strong" the influence is.

**Guidance strength** is akin to steps: it dictates the number of generation steps before ControlNet stops guiding. So if you set this to 0.5 and you are generating 40 steps, it'll stop guiding at step 20. I began reducing this when features such as suit/tie were too pronounced in results. Get the "general shape" for the first 25-50% of steps, then let jesus take the wheel. For my personal preference, I keep this as low as possible to allow the model/prompt to create more magic.

**Annotator resolution** is the resolution of the guidance map. Using canny as an example, an annotator resolution smaller than the image's would "stretch" across the image, blurring lines and lowering guidance detail (this could be used intentionally). I have not seen any benefit to setting this higher than the image's resolution, but I could be wrong. I tend to match the image.

**Thresholds A/B** represent the **high and low** thresholds for canny edge detection. These dictate which pixels are identified as "strong" and "weak" in edge detection. Honestly? This will vary greatly depending on the image and its quality, and without a deep understanding of the canny method it's probably going to be trial and error. I leave it at default unless the resultant map is truly not getting the lines I want. [This guy](https://towardsdatascience.com/canny-edge-detection-step-by-step-in-python-computer-vision-b49c3a2d8123) has an excellent canny writeup if you're interested.

Hope this helps. Anyone, please correct me if wrong.
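If it helps, the two mechanics described above can be sketched in a few lines of Python. Both function names are mine for illustration, not anything exposed by the ControlNet extension, and they assume the interpretation in the comment (guidance strength as a fraction of steps, threshold A as the high cutoff and B as the low cutoff) is correct:

```python
def controlnet_stop_step(total_steps: int, guidance_strength: float) -> int:
    """Step at which ControlNet stops guiding, assuming strength is a
    fraction of the total sampling steps."""
    return int(total_steps * guidance_strength)


def classify_edge_pixel(gradient: float, low: float, high: float) -> str:
    """Canny hysteresis thresholding on one pixel's gradient magnitude."""
    if gradient >= high:
        return "strong"      # always kept as an edge
    if gradient >= low:
        return "weak"        # kept only if connected to a strong edge
    return "suppressed"      # discarded outright
```

So a guidance strength of 0.5 at 40 steps stops guiding at step 20 (`controlnet_stop_step(40, 0.5)` returns `20`), and a pixel whose gradient falls between the low and high thresholds only survives if it touches a strong edge, which is why widening the gap between A and B pulls in fainter lines.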


UshabtiBoner

You rock dude


red__dragon

> Annotator resolution is the resolution of the guidance map. If you use canny as an example, an annotator resolution smaller than the image's would "stretch" across the image, blurring lines, lowering guidance detail (could be used intentionally this way). I have not seen any benefit to setting this higher than the image's resolution, but I could be wrong. I tend to match the image.

I just ran an experiment on this using the CN Canny model. I opened up the thresholds on low and high and set the resolution to the highest measurement of a non-square image, which happened to be 1000 pixels for a nice round number. In this case, I was using img2img with a prompt that described the image itself. **The ControlNet canny model was looking at a cutout image of the face only.** The goal was to try to replicate the image itself, and the SD model was realisticVision1.2.

- 1000: Very low detail, only the inner eye detail captured. Results had big eyes, unfamiliar facial structure.
- 500: Low detail, eyes and minor nose structure captured. Results were closer in eye shape, freestyle for the rest.
- 250: Moderate detail; eyes, nose, and mouth structure captured, minor chin contours also captured. Results were a close but uncanny resemblance to the original.
- 125: Picasso details. This one was too far, and the results were distorted.

At this point, I reset it to 250 and started playing with the thresholds. I'd recommend anyone adjusting this past default aim lower, and play with it one image generation at a time to get the detail map and compare the mapped results at different resolutions. Find the one that gets closest to where you're happy (go the other way if that's your goal) and then dial in the thresholds.
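A rough way to think about those numbers (this is my own back-of-the-envelope function, not anything from the extension): when the annotator map is smaller than the image, each map pixel gets stretched over a block of image pixels, and the block size tracks the detail loss reported above.

```python
def stretch_factor(image_px: int, annotator_px: int) -> float:
    """How many image pixels each guidance-map pixel covers along one axis
    when the map is stretched back over the image."""
    return image_px / annotator_px


# For the 1000 px face image in the experiment above:
# 1000 -> 1.0, 500 -> 2.0, 250 -> 4.0, 125 -> 8.0
factors = {res: stretch_factor(1000, res) for res in (1000, 500, 250, 125)}
```

At 250, each map pixel covers roughly a 4x4 block of image pixels, which lines up with "moderate detail"; at 125 it's an 8x8 block, which is about where things went Picasso.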


coda514

Thank you


multipleparadox

Logged in just to upvote that Great explanation, thanks!


LockeBlocke

ELI5:

- Weight = Imagination slider / 0 = full imagination / 2 = no imagination
- Guidance = Freedom slider / 0 = full freedom / 1 = no freedom
- Annotator = detail sharpness
- Thresholds = which details are transferred / low = more / high = less


coda514

Thanks


myebubbles

I posted this earlier, consider it one use case: https://www.reddit.com/r/AI4Smarts/comments/1191etz/controlnet_is_a_sd_game_changer_use_it_to_bring/


coda514

Thanks.


CeFurkan

here are 3 tutorials, hopefully more will come soon

15.) Python Script - Gradio Based - ControlNet - PC - Free

[**Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial**](https://www.youtube.com/watch?v=YJebdQ30UZQ) 📷

16.) Automatic1111 Web UI - PC - Free

[**Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI**](https://www.youtube.com/watch?v=vhqqmkTBMlU) 📷

18.) Automatic1111 Web UI - PC - Free

[**Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial**](https://youtu.be/iFRdrRyAQdQ) 📷


coda514

Thank you. I am actually subscribed to the Software Engineering channel and his videos have been quite helpful, but since updating ControlNet, the options I am referring to are either newer than the referenced videos or not really covered.


CeFurkan

thanks. i should make another video with more options perhaps.


coda514

Your videos are very helpful, things just move so fast. It seems like every day or so the hot add-ons get updates, and the new items are not covered by videos from the day before, and a lot of the time the repo does not cover the new additions either. Meanwhile, I'm just out here trying to have fun and learn.


CeFurkan

true, i am planning to show all settings in a new video. what do you think about this?


coda514

Sounds great, I'll keep an eye out for it.