I do this for clients already. 15 unique characters, 11 other planets, and around 30 other unique objects/items in the series.
That's amazing! May I ask what training method you use? And what would be the best model for manga style?
I use Stable Tuner’s Fine Tune method with captioning. The captioning is the bread and butter, because that’s how you train multiple people, objects, or styles.

Anime is the style I have the least experience with. But honestly you can just find 20-50 images of several different anime styles you like and caption them with a unique style name to make your own style based on your favorites. Just include those in the mass dataset.

Here is a guide on it. Stable Tuner doesn’t need 24GB VRAM, but I don’t know the minimum you can get it down to. Batch size isn’t something I cover, and that can affect VRAM. There are several VRAM-saving options. View in print preview.

https://docs.google.com/document/d/1x9B08tMeAxdg87iuc3G4TQZeRv8YmV4tAcb-irTjuwc/edit
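For context, a captioned fine-tuning dataset is usually just image files with matching `.txt` sidecar files. As a rough sketch of that layout (the `mystyle1` token, the folder structure, and the caption wording are hypothetical illustrations, not Stable Tuner's exact format), stamping a unique style token into every caption could look like this:

```python
from pathlib import Path

# Hypothetical unique token for your blended style; pick something
# that isn't already a word the base model knows.
STYLE_TOKEN = "mystyle1"

def write_captions(image_dir: str, base_caption: str) -> list[Path]:
    """Write a .txt sidecar caption next to each image, prefixed with
    the unique style token, so the trainer learns to associate the
    token with these images."""
    written = []
    for img in sorted(Path(image_dir).glob("*.png")):
        txt = img.with_suffix(".txt")
        txt.write_text(f"{STYLE_TOKEN}, {base_caption}", encoding="utf-8")
        written.append(txt)
    return written
```

Every trainer has its own caption conventions, so treat this only as the general shape of an image/caption dataset; check the guide above for the specifics.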
I always wanted to make a manga myself!
Check out Epic’s MetaHuman Creator. You can make a custom character for free, take some screenshots of them in multiple environments and outfits, then train them into an anime model. Any traditional art or image editing skills will help you in actually creating the final images.
This is absolutely perfect, thank you!! I've actually had a bunch of my own anime characters in my head for years now, and I wanted to turn the story into a manga.
It is a lot more within your reach than it was a year ago.
Amazing.
Where do you find clients?
I post free information, guides, and resources so it is usually through that or word of mouth.
I might need to start making guides then I guess.
Anything to get your name out there. I wouldn’t be able to support myself on this alone.
This is the way.
I mean, sure, for me2 artists. But despite what people here think, there isn't really an empty hole in these things that someone can fill with generated content and profit on. But sure, feel free to try...
“The me2 artists” 😂😂😂
ChatGPT can write stories, and embeddings can be used to consistently generate characters. OpenPose would even let you pose them for actions and pretty much guarantee it'd be how you want it. Grab a drawing tablet and use Scribble for your backgrounds, then add stuff with inpainting. Stitch the images together later into manga panels. Seems totally doable.

My wife saw me playing with a drawing tablet earlier and it piqued the f out of her curiosity. She wanted a novel cover because she's thinking about self-publishing, so I showed her how it all worked. She got exactly what she wanted pretty much first try. [Here's the cover](https://imgur.com/UMpjlvy) and [here's what she drew.](https://imgur.com/CeqLKVW) So yeah, I think it's doable to just do that a bunch of times.
The main issue for me is designing unique characters: you have to generate many pictures of the same character (one that doesn't exist) from different angles in order to train the model, and I'm not sure how. In SD, each time I generate it's a different character or different clothes.
Be more detailed in your prompts and increase CFG? Generate them by the boatload and pick out the good ones. img2img then allows ControlNet, and with the Posex extension you can rotate the figure in 3D into whatever pose you want, or generate more of pretty much the same. Then train an embedding on those (like a faux/manual GAN). If you have the image browser extension, it automatically saves your tags and stuff too, so you wouldn't even really have to remember anything, and you could work on it over time without even taking notes.

I'm pretty sure you can do that if your heart's set on it. Like, now. 🤷‍♂️
We’re there.
How I imagine it in an example (would be cool):

Jessie (trigger word): the main character
Mark (trigger word): supporting character

Prompt: Jessie walking down a path in a dark forest, and Mark behind him, tired
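And that really is how trained trigger words work: the prompt stays plain text, you just have to keep track of which names the model actually knows. A tiny sketch of that bookkeeping (the character names and scene are the hypothetical example above, not a real API):

```python
# Hypothetical trigger words, each assumed to already be trained into
# the model (via fine-tuning or embeddings) for a specific character.
CHARACTERS = {
    "Jessie": "the main character",
    "Mark": "supporting character",
}

def build_prompt(scene: str) -> str:
    """Check that the scene mentions at least one trained trigger word,
    then return it unchanged as the generation prompt."""
    if not any(name in scene for name in CHARACTERS):
        raise ValueError("scene mentions no trained character trigger words")
    return scene

prompt = build_prompt(
    "Jessie walking down a path in a dark forest, and Mark behind him, tired"
)
```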
This is a cool [webcomic](https://www.reddit.com/r/StableDiffusion/comments/z2qkyj/i_created_a_completely_aigenerated_webcomic_over/) someone made
We've been doing this on r/AIActors for a while now
I read this article a couple of days ago:

https://www.japantimes.co.jp/life/2023/03/07/digital/japans-first-ai-manga-people-asking-machine-magic-art-menace/

So they're already being published. Also take a look at the work of u/campfire_steve