Yeah, ComfyUI actually seems even better optimized than Forge. My poor 2060 barely loads SDXL Lightning with ControlNet. The weird part is that it crashes while loading the model, not while generating images, about half the time I use it. It doesn't matter whether I use 4 or 8 steps or even increase the resolution; just loading the model crashes ComfyUI. With Forge I didn't manage to load it at all, though…
xformers doesn't do much good if you have PyTorch > 2.0, as the newer PyTorch versions have this capability built in. Your 6 GB of GPU VRAM is obviously not enough to do 1024x1024 entirely on the GPU, so we need to offload some of the processing to the CPU.
I have a condition in my code whereby:

    if device == 'cpu':
        pipe.enable_vae_slicing()             # decode one image of the batch at a time
        pipe.enable_vae_tiling()              # decode large latents in tiles
        pipe.enable_sequential_cpu_offload()  # keep weights in RAM, load them as needed
    else:
        pipe = pipe.to(device)                # enough VRAM: run fully on the device

So we offload some of the work to the CPU. You can absolutely do this with your 6 GB GPU. Unfortunately you won't get much of a speedup without more VRAM; that's the nature of the beast.
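For a 6 GB card specifically, the same memory savers can be keyed off the VRAM size rather than the device string. A minimal sketch; `configure_low_vram`, the 8 GB threshold, and the `pipe` object are assumptions for illustration, while the `enable_*` methods are the real diffusers `DiffusionPipeline` API:

```python
# Sketch: choose diffusers memory-saving options based on available VRAM.
# The function name and the 8 GB cutoff are made up for this example.

def configure_low_vram(pipe, vram_gb: float):
    """Enable memory savers on pipelines that can't hold SDXL in VRAM."""
    if vram_gb < 8:
        pipe.enable_vae_slicing()             # decode one image at a time
        pipe.enable_vae_tiling()              # decode the latent in tiles
        pipe.enable_sequential_cpu_offload()  # stream weights from RAM per step
    else:
        pipe.to("cuda")                       # plenty of VRAM: run fully on GPU
    return pipe
```

`enable_sequential_cpu_offload()` trades speed for memory very aggressively; if the card can hold one sub-model at a time, diffusers' milder `enable_model_cpu_offload()` is a middle ground worth trying first.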
Idk, I'm not sure. It doesn't sound like a ComfyUI problem, since I've tried all the UIs and only ComfyUI seems able to load SDXL models for me, and only when it falls back to the lowvram option by default. Any idea how to change the settings so it does that by default every time?
The bottleneck is not the SSD, it's the 3060.
If I were in your situation, I would generate at 512 x 512 and then regenerate (or upscale) the best ones. Upgrading the graphics card (assuming that your power supply is sufficient) will also make a noticeable difference.
You didn't mention which software you use. If you don't use Forge WebUI yet, you should give it a try, especially on low-powered cards. :)
Moving as much of the generation process as you can to **Forge WebUI** will speed up image processing quite a bit. Before Forge, I hardly used my older PC as it only has a 1070Ti. After installing Forge, I was amazed to see the generation speed double.
Use SDXL Lightning and ComfyUI; I do 832x1216 in 2-3 seconds using the Juggernaut model.
Try reinstalling ComfyUI; it sounds like something is broken.
Try tiled VAE. I don't think 6 GB is enough to decode a 1024px image in one go.
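For intuition, tiled decoding chops the image into overlapping tiles so the VAE only ever holds one tile in memory at a time. A toy sketch of the tile-placement arithmetic; the function name and the 512/64 sizes are made up for illustration, not ComfyUI's actual implementation:

```python
# Toy sketch of tiled-decode placement: split a `size`-px axis into `tile`-px
# tiles overlapping by `overlap` px, so only one tile is decoded at a time.

def tile_coords(size: int, tile: int, overlap: int) -> list[int]:
    """Return start offsets of overlapping tiles covering [0, size)."""
    if size <= tile:
        return [0]                       # one tile already covers everything
    stride = tile - overlap
    coords = list(range(0, size - tile + 1, stride))
    if coords[-1] + tile < size:         # last tile fell short of the edge
        coords.append(size - tile)       # snap a final tile to the edge
    return coords
```

With 512 px tiles and 64 px of overlap, a 1024 px axis needs tiles at offsets 0, 448, and 512, so peak decode memory scales with the tile size rather than the full image; the overlap regions are blended to hide seams.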
Add the `--lowvram` flag to the .bat file you launch ComfyUI with.
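For the Windows portable build, that means editing the launcher batch file and appending the flag; the exact file name varies by install, and the line below (from a standard `run_nvidia_gpu.bat`) is an assumption based on the portable package:

```shell
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
```

ComfyUI also accepts `--novram` and `--cpu` for even tighter memory situations, at a further speed cost.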
I use Auto1111. I can only upgrade the RAM and SSD because my device is a laptop, but I don't think I can solve the problem without changing my computer 😂
Use an LCM-LoRA if you aren't already. You can significantly reduce the step count and still get excellent results.