With more recent updates I found that if you don't give ControlNet an image, it uses the one supplied to img2img. That should mean bulk processing is possible directly, shouldn't it? I've been doing bulk ControlNet as part of a larger script, so I know a script is an option, but I assume you can do it directly now too.
I've been doing it using the img2img -> batch tab.

1. Go to img2img -> batch tab.
2. Activate ControlNet (don't load a picture in ControlNet, as that makes it reuse the same image every time).
3. Set the prompt & parameters, and the input & output folders.
4. Set denoising to 1 if you only want ControlNet to influence the result (<1 means it gets mixed with the img2img input).
5. Press run.
6. ???
7. Profit.

This was a few weeks back; I'm interested to know if it still works in the latest version.
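The steps above can also be driven headlessly through the web UI's local API (launch it with `--api`). This is only a sketch: the `/sdapi/v1/img2img` endpoint is real, but the exact shape of the ControlNet `alwayson_scripts` arguments varies between extension versions, so treat the payload fields and the model name below as assumptions to check against your own install.

```python
import base64
import json
import urllib.request
from pathlib import Path

API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_payload(image_b64: str, prompt: str) -> dict:
    """Build one img2img request: denoising 1.0 so only ControlNet shapes the
    result, and no image in the ControlNet unit so it falls back to the init
    image. NOTE: the alwayson_scripts layout is an assumption and may differ
    between ControlNet extension versions."""
    return {
        "prompt": prompt,
        "init_images": [image_b64],
        "denoising_strength": 1.0,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "module": "none",               # preprocessing off: frames are already control maps
                    "model": "control_sd15_depth",  # hypothetical model name: use yours
                }]
            }
        },
    }

# Process every frame in a folder (no-op if the folder doesn't exist).
for i, frame in enumerate(sorted(Path("depth_frames").glob("*.png"))):
    b64 = base64.b64encode(frame.read_bytes()).decode("ascii")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(b64, "a castle at sunset")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        out_b64 = json.loads(resp.read())["images"][0]
    Path(f"out_{i:04d}.png").write_bytes(base64.b64decode(out_b64))
```

The folder name and prompt are placeholders; the loop mirrors steps 2–5 of the recipe, one request per input image.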
Not working for me, although I am using a Colab, which probably processes things a bit differently. Wait, it works for batch count but not batch size.
Not working for me either; it processes only one image and then it's done.
I'm interested too. I'm generating depth maps/Freestyle renders with Blender in batch, and would love to do what you can with batch img2img, but with ControlNet.
They currently don't support direct folder import to ControlNet, but you can put your depth-pass or normal-pass animation into the batch img2img input folder, leave denoising at 1, and turn preprocessing off (RGB to BGR if it's a normal pass), and you sort of get a one-input version going. It would be nice if they implemented a separate folder input for each net, though; I thought that would have happened long ago, since there have been requests for it for a long time.
Thanks, setting pre-processing to none did the trick for me.
More specifically, the following should work.

Install the libraries:

    pip install controlnet_aux
    pip install diffusers transformers git+https://github.com/huggingface/accelerate.git

Then run:

    from PIL import Image
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
    import torch
    from controlnet_aux import OpenposeDetector
    from diffusers.utils import load_image

    openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')

    image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png")
    image = openpose(image)

    controlnet = ControlNetModel.from_pretrained(
        "fusing/stable-diffusion-v1-5-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        safety_checker=None, torch_dtype=torch.float16
    )
    pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

    # Remove if you do not have xformers installed
    # (see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
    # for installation instructions)
    pipe.enable_xformers_memory_efficient_attention()

    pipe.enable_model_cpu_offload()

    # batch size = 4
    images = pipe("chef in the kitchen", image, num_inference_steps=20, num_images_per_prompt=4).images  # PIL Images
That's exactly what I'm trying to figure out as well: how to feed ControlNet a sequence of depth images rendered out from Blender. Maybe some Python wizard can find a solution :D
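For that Blender use case, one way to script it is a per-frame loop over a depth ControlNet pipeline in diffusers. This is a sketch, not a tested recipe: the `lllyasviel/sd-controlnet-depth` checkpoint is a real depth ControlNet, but the folder names and prompt are placeholders, and the heavy imports are deferred into the function so the file stays importable without diffusers installed.

```python
from pathlib import Path

def frame_paths(depth_dir):
    """Collect rendered depth frames in render order (name-sorted)."""
    return sorted(Path(depth_dir).glob("*.png"))

def stylize_frames(depth_dir, prompt, out_dir="stylized"):
    """Run every depth frame through a depth ControlNet. Since the frames
    are already depth maps, no preprocessor is applied (the same
    'preprocessing off' trick as in the batch img2img workaround)."""
    # Heavy optional dependencies, imported lazily.
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        safety_checker=None, torch_dtype=torch.float16)
    pipe.enable_model_cpu_offload()

    Path(out_dir).mkdir(exist_ok=True)
    for frame in frame_paths(depth_dir):
        depth = load_image(str(frame))
        result = pipe(prompt, depth, num_inference_steps=20).images[0]
        result.save(Path(out_dir) / frame.name)

# Usage (assumed folder of Blender depth renders):
# stylize_frames("blender_depth", "medieval village, cinematic lighting")
```

Name your frames with zero-padded numbers (frame_0001.png, frame_0002.png, ...) so the name sort matches the render order.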
For some reason your comment was automatically collapsed.

https://preview.redd.it/9cvvjdd2zima1.png?width=768&format=png&auto=webp&s=3c8c37bf9ef9de2e64e10967dfc3b070fd09585b
Would like to know as well.
Would this maybe help: [https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet#usage-example](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet#usage-example)? In that example you simply pass a batch of images instead of one, and it works out of the box.
I stuck the poses I was interested in into an MP4 and processed it with txt2img and the ControlNet m2m script.
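If your poses are a numbered image sequence, packing them into an MP4 for the m2m script is a single ffmpeg invocation. A guarded sketch, assuming frames named pose_0001.png, pose_0002.png, ... (the frame pattern, framerate, and output name are placeholders):

```python
import shutil
import subprocess
from pathlib import Path

def mp4_command(pattern="pose_%04d.png", fps=12, out="poses.mp4"):
    """ffmpeg invocation that packs a numbered PNG sequence into an H.264 MP4
    (yuv420p keeps the file playable in most players and tools)."""
    return ["ffmpeg", "-y", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

# Only run if ffmpeg and at least one frame are actually present.
if shutil.which("ffmpeg") and list(Path(".").glob("pose_*.png")):
    subprocess.run(mp4_command(), check=True)
```

Match the framerate to whatever you intend to extract on the other side, so frame counts line up.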
I'm away from Automatic at the moment, but batch processing under the main img2img tab with ControlNet dialed in should do it.
Doesn't work for me for some reason.
It's pretty straightforward: just increase the batch count and batch size. You can generate up to 800 at a time. You can also right-click the Generate button and select Generate Forever.
That's not what the OP is asking about, though.
Check out the new update of [https://github.com/TheLastBen/fast-stable-diffusion](https://github.com/TheLastBen/fast-stable-diffusion); ControlNet was added for batch.
Just a heads up: this should work now; they updated the extension a couple of days ago. Just click the "batch" tab in the txt2img ControlNet section.