How do I do it?
by Soph
Some people keep asking how I make the things I upload. I decided to write it up as a journal so I have a direct link to the answer. I don't think the journal itself will reduce how often people ask, but it will make answering easier.

Tools and models

Usually I use the Yiffymix3 model from civitai.com. Sometimes I experiment with other models like PonyDiffusion, but I like Y3 more, because it has the original SD model merged inside, which lets me prompt for the styles of artists from the original SD training data who died 25+ years ago.
Either way, I attach a description file or paste the prompt with the submission, so you can always find the model name and hash there.

As for the generation tool, I use Automatic1111. For people who have no idea how these things work: it doesn't matter which tool you use, because the image is generated by the model. Any parameters I enter in Automatic1111 are simply passed to the model loaded on my PC. You can use any tool you like: DreamStudio, Hugging Face, or anything else.
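To illustrate that point, here is a minimal sketch of what any frontend does under the hood, using the diffusers library. The checkpoint filename, prompt, and settings are placeholders, not my real ones:

```python
import torch
from diffusers import StableDiffusionPipeline

# "yiffymix3.safetensors" is a placeholder filename for a checkpoint
# downloaded from civitai.com.
pipe = StableDiffusionPipeline.from_single_file(
    "yiffymix3.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt and parameters are handed straight to the model,
# exactly as a GUI like Automatic1111 would do.
image = pipe(
    "anthro fox, detailed fur, forest background",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("test.png")
```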

The process

Usually all my generations are a hybrid of AI-generated and AI-assisted. I think I need to explain the process a bit more.

When I have an idea involving a well-known character, the first thing I do is put the character's name into the prompt without any additional tags. I don't care about quality or pose at this stage. I just need to know whether the model is aware of the character and had enough art pushed inside to reproduce it.

For example, I tried "Gadget Hackwrench" and got a mouse that looks like her. But "Chip" didn't do the trick, so I had to give up on adding him to the pics with her.

For original characters, like my own, I skip this step, as I already know the model is aware of foxes and of the tags I tested earlier.

Sometimes I use the "X/Y/Z plot" script (included in Automatic1111) and enter the names of the characters that were used to train the model (the creators provide the list).
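Outside Automatic1111, the same kind of sweep can be scripted by hand. A sketch, with placeholder names standing in for the list the creators publish:

```python
# Hand-rolled equivalent of an X/Y/Z plot over character names.
characters = ["character_a", "character_b", "character_c"]  # placeholders
seed = 1234  # same seed for every image so only the prompt varies

for name in characters:
    img = pipe(
        name,
        num_inference_steps=20,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    img.save(f"xyz_{name}.png")
```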

After I've decided which character to use, I think of the pose, fill in the prompt, and generate 1-2 low-quality test images just to be sure the model understands my pose description and the surroundings. If everything is okay, I go to the next step.

Then I generate 1-4 batches of 8 images at 512x512 pixels. If I see one I like, I pick it and run img2img on it for 1-2 batches of 2 images at (usually) 1024x1024 pixels, with the same prompt, same parameters, and same seed where possible.
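Roughly the same two-pass flow as a diffusers sketch (not my exact settings; the img2img pipeline is built from the same already-loaded components):

```python
from diffusers import StableDiffusionImg2ImgPipeline

prompt = "anthro fox sitting on a log, forest, soft light"  # placeholder
seed = 7777

# Pass 1: a batch of 8 candidates at 512x512.
candidates = pipe(
    prompt,
    width=512, height=512,
    num_images_per_prompt=8,
    generator=torch.Generator("cuda").manual_seed(seed),
).images

picked = candidates[3]  # whichever one looks best

# Pass 2: img2img on the pick at 1024x1024,
# same prompt, same seed, moderate denoising.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components).to("cuda")
hires = img2img(
    prompt=prompt,
    image=picked.resize((1024, 1024)),
    strength=0.5,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
hires.save("hires.png")
```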

Usually there are a lot of things that need to be changed. For example, here's the Gadget picture with the points that were completely redrawn marked up:


After that I run the image through img2img again with the same parameters, but with the denoising strength somewhere around 0.15-0.20, just to hide the rough brushwork and the traces of editing.
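In a script, that final pass looks something like this, reusing the img2img pipeline, prompt, and seed from the previous sketch; strength plays the role of the denoising knob:

```python
from PIL import Image

# Final polish: very low denoising strength, so the model only
# smooths over brush strokes and edit seams without repainting.
edited = Image.open("hires_edited.png")  # the manually retouched image
final = img2img(
    prompt=prompt,
    image=edited,
    strength=0.18,  # roughly the 0.15-0.20 range mentioned above
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
final.save("final.png")
```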

Aaand... that's all. That's how I do it. Sometimes I take objects or background parts from the initial batches. It's like photobashing, but only using pictures that were generated with the same model (there's a sketch of this after the list), because:

a) All the parts should be made with the same AI model, because a random picture from the internet carries its own copyright.
b) When using img2img, the model applies its own style, and if an object (like a cup on the table) "doesn't look right", the model will change it to something it recognizes and redraw it as, say, a flower or a hand.
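A rough sketch of that photobash-then-blend idea, with made-up filenames and crop coordinates (PIL for the paste, then the same low-strength img2img pass from above):

```python
from PIL import Image

# Paste an object (e.g. a cup) from another image of the same batch,
# then let a low-strength img2img pass blend the seam away.
base = Image.open("picked.png")
donor = Image.open("candidate_with_good_cup.png")
cup = donor.crop((300, 600, 420, 720))  # rough box around the object
base.paste(cup, (300, 600))

blended = img2img(
    prompt=prompt,
    image=base,
    strength=0.18,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
blended.save("blended.png")
```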
