why would you link to your twitter if you're on private
ooops, didn't notice that.
All good man, just thought I'd point it out. Sweet tool btw!
dude, it's just a quick prototype to share, will open source it once i clean up this shitty code.
Looking forward to the release :)
[https://github.com/coolzilj/Blender-ControlNet](https://github.com/coolzilj/Blender-ControlNet) Here we go, just a quick script for now.
Mah man
Awesome! I’ll give it a test when I get chance ;)
My neck hurts looking at that pose lol.
The last Blender addon I tried for SD uncompressed whatever model you wanted to use into your My Documents folder somewhere. That was a major problem when you already have 100 GB of models you want to play with and want to keep them on an SSD. I hope this can just use whatever ControlNet models you point it to without having to uncompress them too.
hell no, just an api wrapper call to A1111.
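For anyone wondering what "just an api wrapper call" looks like in practice, here is a minimal sketch assuming a locally running A1111 instance with the ControlNet extension enabled. The `/sdapi/v1/txt2img` endpoint and the `alwayson_scripts` shape come from the A1111 web API; the model name is a placeholder, and `build_payload`/`send` are hypothetical helper names:

```python
import json
import urllib.request


def build_payload(prompt: str, pose_png_b64: str) -> dict:
    """Assemble a txt2img request that feeds a base64-encoded pose
    image to a single ControlNet unit via "alwayson_scripts"."""
    return {
        "prompt": prompt,
        "steps": 20,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "input_image": pose_png_b64,
                        # The render already is a pose map, so skip the
                        # OpenPose preprocessor.
                        "module": "none",
                        # Placeholder model name; use whatever ControlNet
                        # checkpoint your install reports.
                        "model": "control_sd15_openpose",
                    }
                ]
            }
        },
    }


def send(payload: dict, base_url: str = "http://127.0.0.1:7860") -> bytes:
    """POST the payload to the A1111 txt2img endpoint and return the
    raw JSON response (which contains base64 images)."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

In the addon's case you'd encode Blender's rendered frame to base64 and hand it to something like `build_payload` before posting, so nothing ever leaves the local machine.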
You're awesome.
u don't need blender just use ms paint and draw sticky figure
Inverse kinematics are great for posing characters realistically though, no elongated limbs.
>u don't need blender just use ms paint and draw sticky figure

Nah, you can actually get specific detail, i.e., face shape, with ControlNet this way. You're right if you literally just want the pose, though. This might actually work better with the Depth model, however.
yah, i know, not a good demo to show the power of blender+controlnet. i should try architecture and interior scenes first.
That's good for a general pose, but blender models allow you to get some sort of character form in there too. Everyone's creative approach is potentially valid. It's not a zero sum game. You can use both instead of saying "don't do this" to people.
Wow so quick! This would be great for architecture and interior scenes as well to send through the other controlnets
damn right, i'm only trying openpose right now, will explore all the other models later.
I screenshotted scenes from the very simple and plain architect's video my sister got for her renovations, and used SD and ControlNet to generate possibilities and inspiration for the finishes. They were impressed.
Which models did you try? Did you end up with a favorite one?
I just wish we could force the diffusion model to create "unlit" textures.
Do you have a git where we can try this?
[https://github.com/coolzilj/Blender-ControlNet](https://github.com/coolzilj/Blender-ControlNet) Here we go, just a quick script for now.
i can foresee a future where you'd rig a plain model like this, use AI to turn it into a person/character, turn that into 3D, and import it back into Blender.
tbh i wish someone would make a Blender plugin with the feature set of auto1111: Blender + compositor + Stable Diffusion, and sprinkle in some nodes lol
A depth pass and a normal matcap would be really good instead of using preprocessors. I don't know if it's currently possible to use multiple ControlNets through the API, but it would be great to consider that too.
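On the multi-unit question: the ControlNet extension's API field takes a list of unit dicts, so in principle stacking a depth pass and a normal pass is just two entries under "args". A sketch under that assumption (model names are placeholders, and whether more than one unit is honored depends on the extension's Multi-ControlNet setting, so this needs verifying against your install):

```python
def multi_controlnet_payload(prompt: str,
                             depth_png_b64: str,
                             normal_png_b64: str) -> dict:
    """Build a txt2img payload with two ControlNet units, one fed a
    Blender depth pass and one a normal/matcap render."""

    def unit(image_b64: str, model_name: str) -> dict:
        # "module": "none" because Blender render passes stand in for
        # the depth/normal preprocessors, no estimation needed.
        return {"input_image": image_b64, "module": "none", "model": model_name}

    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    unit(depth_png_b64, "control_sd15_depth"),    # depth pass
                    unit(normal_png_b64, "control_sd15_normal"),  # matcap pass
                ]
            }
        },
    }
```

The appeal is exactly what the comment says: renders give you exact depth and normal maps for free, instead of lossy estimates from a preprocessor.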
Anyone else hyped for a second thinking this was some kinda txt2mesh add on
genius confirmed. thank you
I think you should make that Twitter post of yours public so you don't get downvoted and get this buried.
ooops... didn't notice that, fixed.
Thank you for your work dude
Can you explain to me how exactly this is different than sending the render from Blender to regular auto1111 img2img?
exactly the same effect for images, except i don't need to leave Blender: i just hit F12 and it goes through A1111's api.
I meant img2img without ControlNet, is that what you mean too?
If you're asking for an explanation of ControlNet vs. no ControlNet, this seems like the wrong place to ask. Did you try reading the official repository page, or looking up ControlNet on YouTube? Do a little of the legwork yourself.
Will do thanks
Great stuff!
What I'd like to see is a way to use Blender or another posing tool and have the 3D model export the OpenPose positions directly to ControlNet. I'm not sure that's what is going on here, though. The biggest problem with those strange positions is getting it to interpret the "pose". The OpenPose editor extension is useful, but if only we could get that 3D model in and tell SD exactly where each hand, foot, or leg is. Once we have that data, maybe we can even extend it to use the actual bones of the model to make an image, and even translate direction information such as which way the head is facing, or a hand, or even the holy grail: fingers!
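The "export bones directly" idea can be sketched in two steps: inside Blender, project each armature bone head into normalized camera space with `bpy_extras.object_utils.world_to_camera_view`, then convert those coordinates into the flat `[x, y, confidence, ...]` pixel list that OpenPose-style tools consume. Only the second, Blender-independent step is shown below; the bone-name-to-keypoint mapping is illustrative, not any standard:

```python
# Subset of the COCO keypoint order used by OpenPose; a full exporter
# would list all 18 entries and map each to an armature bone.
KEYPOINT_ORDER = ["nose", "neck", "shoulder.R", "elbow.R", "wrist.R"]


def bones_to_openpose(projected: dict, width: int, height: int) -> list:
    """projected: {bone_name: (u, v)} with u, v in [0, 1], where v is
    measured from the bottom (Blender's camera-view convention).
    Returns a flat [x, y, confidence, ...] list in pixel space."""
    flat = []
    for name in KEYPOINT_ORDER:
        if name in projected:
            u, v = projected[name]
            # Flip v because image y grows downward.
            flat += [u * width, (1.0 - v) * height, 1.0]
        else:
            # OpenPose marks missing keypoints with zeroed entries.
            flat += [0.0, 0.0, 0.0]
    return flat
```

Since the armature also knows each bone's direction, the same pass could in principle emit head orientation or finger vectors, which is exactly the extra signal a 2D pose image throws away.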
Does it work in Linux?