>>69938634
Alright, here's the advice I can offer for AI.
1. Get A1111:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/
2. Follow the metadata. Under the PNG Info tab, if you load a PNG that hasn't had its metadata stripped, it'll show you the prompt used to make it, the seed, and the models used. You can usually google and download those. Checkpoints go in the models/Stable-diffusion/ folder, Loras go in the models/Lora/ folder. Look for AI anons posting catbox links in here or other AI threads. If you hit "Send to txt2img" it'll copy the settings over, which you'll want to lean on heavily.
3. Checkpoints are your "main" model. If you use an older model, it's going to struggle a lot more than a newer one will. Loras, on the other hand, are ancillary models, sometimes for styles, concepts (memes), or characters.
4. One extension I would highly recommend that isn't going to show up in metadata is
https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
This will help you avoid using terms that aren't actually danbooru tags and thus don't have any training data behind them. If you want to sanity-check a tag by hand, there's a quick sketch for that after this list too.
5. Remember that the models don't literally understand what tags mean - they just learn features from images tagged with those tags. Usually that means literally the features the tag describes, but the model often picks up vestigial features alongside them.
6. Negative prompts are very important. Quality tags are also important. It's kind of funny that basically telling the AI to make better art makes it produce better art, but it is what it is.
7. You can scale the strength of tags like this: (some tag:1.2) for stronger or (some tag:0.8) for a bit weaker, adjusting up or down depending on what you need. Too little strength and the tag gets ignored; too much and it causes very strange distortions. Usually the default is fine, but you will have to play with it for some tags. There's a rough example of how negatives, quality tags, and weighting fit together right after this list.
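If you'd rather poke at 6 and 7 from a script instead of the UI, here's a rough Python sketch - it assumes you launched the webui with the --api flag so the /sdapi/v1/txt2img endpoint is exposed, and the prompt, negative prompt, and settings are made-up placeholders for illustration, not anything I actually use:

# Rough sketch: quality tags, a negative prompt, and (tag:weight) scaling,
# sent to a locally running A1111 webui started with the --api flag.
# Prompt text, seed, and settings below are placeholders, not recommendations.
import base64
import requests

payload = {
    "prompt": "masterpiece, best quality, 1girl, (red eyes:1.2), long hair",
    "negative_prompt": "lowres, bad anatomy, bad hands, jpeg artifacts, watermark",
    "steps": 28,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    "seed": -1,  # -1 = random seed
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded PNGs in the "images" list.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))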
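And for checking whether a term is actually a danbooru tag without installing tagcomplete yet, you can hit danbooru's public tag search. Another rough sketch - I'm going from memory on the exact search parameter, so treat it as an assumption:

# Rough sketch: check whether a term is a real danbooru tag (and roughly how
# much training data sits behind it) via danbooru's public JSON API.
# The search[name_matches] parameter name is from memory - treat as an assumption.
import requests

def check_tag(name: str) -> None:
    resp = requests.get(
        "https://danbooru.donmai.us/tags.json",
        params={"search[name_matches]": name},
        timeout=10,
    )
    resp.raise_for_status()
    tags = resp.json()
    if not tags:
        print(f"'{name}' is not a danbooru tag - the model probably never saw it.")
    else:
        for tag in tags:
            print(f"{tag['name']}: {tag['post_count']} posts")

check_tag("red_eyes")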
There is more to learn, but those are the most important things to get started, I would say. If you need more help, feel free to ask here or in the discord. I really can't overstate the value of learning from the metadata of others, though, especially when it comes to upscaling settings, sampling methods, etc.
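If you just want to peek at what's baked into a PNG without opening the webui, A1111 stores the generation parameters in the PNG's text metadata under a "parameters" key. Minimal Python sketch - the filename is just a placeholder:

# Minimal sketch: read A1111 generation metadata straight out of a PNG.
# Only works if the metadata hasn't been stripped (catbox keeps it; most
# imageboards strip it). "gen.png" is a placeholder filename.
from PIL import Image

img = Image.open("gen.png")
params = img.info.get("parameters")  # prompt, negative, seed, sampler, model hash, etc.
print(params if params else "No generation metadata found - probably stripped.")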
Here's the catbox for that one, though for some reason I didn't use my usual settings with it.
https://files.catbox.moe/xqdo85.png
https://files.catbox.moe/agm8xq.png