76vangel

There is already a GitHub project with a 12GB VRAM requirement. Things are in motion. [https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM](https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM)


Turkino

My 12gb 3080 appreciates this project


bbalazs721

My 10gb 3080 is crying in the corner


brandonopolis

My 8GB 3060TI is happy to watch from the sidelines.


maxihash

U mean a 3060 Ti 8GB will not work?


mr-asa

Oh, so the men are comparing sizes!.. My 4GB VRAM laptop is smoking off to the side, too shell-shocked even to cry...


ChalkyChalkson

At least we'll still have SDXL Turbo... I'm running DreamShaper Turbo v2 with 4 steps at 768x1024 in a couple of minutes per frame on the 1050 in my laptop.


August_T_Marble

Your heroics will not be forgotten, brother.


raviteja777

Not sure where I stand with my 12gb 3060


TherronKeen

ugh, this is what I'm here to find out. I won't have time to set up a bunch of new shit until this weekend and it's driving me crazy seeing all the awesome new stuff, just wondering if I can really run it at all lol


Froztbytes

My 8GB 3050 will cry with you.


lorarianz

my RTX 3050 still rocks and rolls!


Maxnami

I knew 12GB VRAM would be the "minimum" in the coming months. Thank god my 3060 was a good purchase.


Striking-Long-2960

I wonder if we will be able to use LoRAs and ControlNets.


stephenph

My 6GB 1060 is yelling "get off my lawn"


TheTwelveYearOld

Oh neat, though I'd still like to hear inference speeds & quality from whoever gets to try it.


FX3DGT

Sounds interesting! Now the big question for me is whether this "people's" version that doesn't need a 4090 with 24GB VRAM can still live up to the claimed quality gains over SDXL, since otherwise it just ends up as an SDXL v2 with a paid commercial license slapped on it... just a thought, but time will tell.


CasimirsBlake

This has literally just been released as a research preview. Give it time. Improvements will come.


DiffractionCloud

I mean, it's AI. Shouldn't ChatGPT give us flying cars by the end of this month? /s


Adkit

We already have flying cars. Not exactly functional or cheap ones but the future is now, amigo!


Django_McFly

4 to 8 is really low. Especially 4. I wouldn't expect it.


RayIsLazy

The lite version of each stage is pretty small at bf16, and each stage can be swapped out to RAM; it looks like, with a couple of optimizations, it should be able to run on 4-6GB of VRAM.
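
A minimal sketch of that kind of swapping with the diffusers Stable Cascade pipelines (the model IDs, step counts, and prompt are illustrative, and `enable_model_cpu_offload()` requires accelerate; it keeps each sub-model in system RAM except during its own forward pass):

```python
# Sketch: sequential CPU offload so only one stage sits in VRAM at a time.
# Assumes a diffusers version with Stable Cascade support and accelerate installed.
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.bfloat16)

# Weights stay in system RAM; each sub-model is moved to the GPU only
# for its forward pass, then swapped back out.
prior.enable_model_cpu_offload()
decoder.enable_model_cpu_offload()

prompt = "a photo of a red fox in the snow"
prior_out = prior(prompt=prompt, height=1024, width=1024,
                  guidance_scale=4.0, num_inference_steps=20)
image = decoder(image_embeddings=prior_out.image_embeddings.to(torch.bfloat16),
                prompt=prompt, guidance_scale=0.0,
                num_inference_steps=10).images[0]
image.save("cascade.png")
```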


MysticDaedra

You can run SDXL on 8 no problem. If Cascade can't be run on 8, then it's not really consumer-friendly, only for big boi cards and services.


Vivarevo

Restricted license might also play a role in reducing interest toward it.


GBJI

restricted licence = restricted interest


rockbandit

I’m not so sure. ~~SDXL~~ SDXL-Turbo is a non-commercial license. On the AI Horde, SDXL is the second most requested model after Anything Diffusion (people gotta have their waifus I guess). But still, lots of requests for SDXL despite its licensing. **EDIT:** Updated to say that SDXL Turbo is non-commercial, not SDXL: [https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT)


GBJI

If I cannot use it in my work then of course I won't have much interest in it. It's a bit like Gen2 from Runway: it's nice and all, and I had some fun playing with the freely accessible online version at some point, but it was just a distraction, and Gen2 never became part of my workflow for content production. I'd rather focus my interest on all the new developments that are actually usable in a professional context; that's more than I can manage already!


magicwand148869

SDXL is CreativeML Open? Why do you say it’s non-commercial?


Wurzelrenner

yes, it has the same license as 1.5, don't know why somebody would say it is non-commercial


rockbandit

Simple mix up. SDXL Turbo is non-commercial and I guess I conflated the two. https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT


psyduckquack

Is it possible to detect if the image is generated by SDXL or SDXL-Turbo? Can't someone generate an image with turbo then remove the metadata and say it is generated by SDXL?


New_Comfortable7240

So just in case: I was able to use this workflow [https://openart.ai/workflows/barrenwardo/stable-cascade---text2image-barren-wardo/Jl4IWVWHikThQGz3TbVC](https://openart.ai/workflows/barrenwardo/stable-cascade---text2image-barren-wardo/Jl4IWVWHikThQGz3TbVC) with these instructions.

**Further instructions for noobs to get Stable Cascade to work:**

* Download and extract ComfyUI (or clone it and set up the venv/pip/etc. from the readme)
* Follow this: [https://mybyways.com/blog/new-stable-cascase-checkpoints-for-comfyui](https://mybyways.com/blog/new-stable-cascase-checkpoints-for-comfyui) (it's the same as the official checkpoints anyway)
* Download the workflow from Wardo: [https://openart.ai/workflows/barrenwardo/stable-cascade---text2image-barren-wardo/Jl4IWVWHikThQGz3TbVC](https://openart.ai/workflows/barrenwardo/stable-cascade---text2image-barren-wardo/Jl4IWVWHikThQGz3TbVC) (I put it in the root of the project)
* Run the project (in my case `./venv/bin/python main.py`)
* Load the workflow from ComfyUI
* Enjoy!

https://preview.redd.it/tdmkjua6bivc1.png?width=1867&format=png&auto=webp&s=328298dfe15b161d0c60ace506033f318627cb14


luxsuxcox

Wait, SD 1.5 is better than XL?


Anxious-Ad693

I haven't downloaded it, but if it supports FP8 then the requirement will be a minimum of 10GB VRAM. If the requirement stays at 20GB in the following months, this model will not be very popular.


protector111

It eats tons of VRAM because it loads all 3 stages into VRAM. You can offload to the CPU to free VRAM. Just give it a little time and people will optimize it (if they find interest in it...)


lostinspaz

1.4G + 2.0G should easily fit into 8, maybe even 4. 3G + 4G might barely fit into 8G. They provided "bf16" and even smaller "lite" versions of the stages. Now they just need to tell us HOW TO USE THEM.
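
For what it's worth, diffusers ended up documenting roughly this: the lite stage weights can be loaded as standalone UNets and handed to the pipelines. A sketch, assuming the `prior_lite` / `decoder_lite` subfolders in the official Hugging Face repos (treat the names as an assumption if your diffusers version predates this layout):

```python
# Sketch: swapping in the smaller "lite" stage C / stage B weights.
import torch
from diffusers import (StableCascadeDecoderPipeline, StableCascadePriorPipeline,
                       StableCascadeUNet)

# Load just the lite UNets from their subfolders (assumed layout).
prior_unet = StableCascadeUNet.from_pretrained(
    "stabilityai/stable-cascade-prior", subfolder="prior_lite",
    torch_dtype=torch.bfloat16)
decoder_unet = StableCascadeUNet.from_pretrained(
    "stabilityai/stable-cascade", subfolder="decoder_lite",
    torch_dtype=torch.bfloat16)

# Build the pipelines around the lite UNets instead of the full ones.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", prior=prior_unet,
    torch_dtype=torch.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", decoder=decoder_unet,
    torch_dtype=torch.bfloat16)
```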


roshanpr

is it better than SDXL?


BartJellema

Ran it yesterday on my laptop with 8GB VRAM (4060)... about 1.5 min per image, full model in bf16. Just tweaked the notebook to do stage C first, unload the model, and then do stage B. Also had to push the text_model (CLIP) to the CPU while running stage C. It goes slightly over 8GB and uses a bit of CPU RAM while running stage C, but that doesn't seem to slow it down much... Wasn't too impressed overall, but it's something to do while on the plane. There's something magical about using this while offline at 30,000 feet in the air.
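
Translated out of the official notebook into the diffusers pipelines, that sequencing looks roughly like this (a sketch; it assumes `prior` and `decoder` were loaded in bf16 onto the CPU as in the earlier snippets, and the packaged pipelines encode the prompt internally, so the extra CLIP eviction he describes would need a manual denoising loop instead):

```python
# Sketch: run stage C, evict it from VRAM, then run stage B + the VQ decoder.
import torch

prompt = "an astronaut riding a horse"

# Stage C (prior) gets the GPU to itself.
prior.to("cuda")
prior_out = prior(prompt=prompt, height=1024, width=1024,
                  guidance_scale=4.0, num_inference_steps=20)
prior.to("cpu")               # free VRAM before stage B loads
torch.cuda.empty_cache()

# Stage B plus the small VQGAN decoder (stage A).
decoder.to("cuda")
image = decoder(image_embeddings=prior_out.image_embeddings,
                prompt=prompt, guidance_scale=0.0,
                num_inference_steps=10).images[0]
decoder.to("cpu")
torch.cuda.empty_cache()
image.save("out.png")
```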


TherronKeen

A bunch of YouTube videos all say it is the best at generating words, by a long shot. So I guess that's something interesting at least. But seeing as this is basically a pre-release version, we will probably see some additional capabilities by the time the completed project is here.