AI isn’t that magical. You get good images by continuing to feed it prompts and tweaking weights until it eventually makes something good. The lighting kinda suggests Midjourney, but it could as easily be 1,000 tries of Stable Diffusion.
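(For anyone curious what that grind looks like in practice, here’s a minimal sketch using the Python diffusers library — the model ID, prompt, and seed range are just placeholders, not anyone’s actual workflow.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman in a ballroom, dramatic rim lighting, 35mm film"

# The "keep trying until something looks good" loop: same prompt,
# different seeds, save everything, pick the winner afterwards.
for seed in range(20):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(
        prompt,
        guidance_scale=7.5,        # how strongly the model follows the prompt
        num_inference_steps=30,
        generator=generator,
    ).images[0]
    image.save(f"try_{seed:03d}.png")
```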
@RememberTheApollo_@lemmy.world
Also if it’s on Stable Diffusion there’s 100% a LoRA for those two available. So from that point it would be harder not to get their likeness :D
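(Roughly what that looks like in diffusers — the LoRA file name, trigger token, and scale below are hypothetical.)

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach a character LoRA on top of the base model (hypothetical file).
pipe.load_lora_weights("loras/some_character_lora.safetensors")

image = pipe(
    "photo of sks_character at a gala, soft lighting",   # hypothetical trigger token
    cross_attention_kwargs={"scale": 0.8},               # how strongly the LoRA is applied
).images[0]
image.save("lora_test.png")
```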
I’ve not used StableD directly to create a specific person, or at least not popular/known ones, so I haven’t had the benefit of a LoRA. I generally just do scenes where, if there are people, they’re background fillers and any likeness will do.
I’m quite familiar with AI and how prompts work; I’ve been using a couple of these tools for years. Each one seems to have its own particular (skill? emphasis? personality?), and no matter what you feed it via prompts, it may not overcome whatever it is that prevents it from generating the scene you want. That often depends on what it was trained on: say you want a character from Labyrinth (1986), but the AI keeps forcing imagery from Pan’s Labyrinth, and no amount of negative or other prompting will fix it. Some are deliberately designed to avoid things like accurate likenesses (some devs don’t want their AI used for deepfakes), some emphasize a particular art style, and some, like StableD, require tons of “corralling” to get them to generate what you want, if you’re lucky.
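(For reference, this is roughly what that “corralling” looks like in diffusers — a prompt plus a negative prompt. Whether it actually steers the model away from Pan’s Labyrinth depends entirely on the training data; the model ID and prompts here are placeholders.)

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="the goblin king from the 1986 film Labyrinth, 80s fantasy, film still",
    # Negative prompts push the sampler away from these concepts, but they
    # can't remove associations baked into the training data.
    negative_prompt="Pan's Labyrinth, faun, horror, dark gritty realism",
    guidance_scale=8.0,
    num_inference_steps=30,
).images[0]
image.save("labyrinth_attempt.png")
```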
Stable Diffusion is just the path to loading a model; it really depends what model you use. You can’t get the right Labyrinth out of a model that wasn’t trained on the Labyrinth you want.
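(In diffusers terms, “which model” is just which weights the pipeline points at — the base repo ID and the fine-tuned .safetensors path below are placeholders.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Same Stable Diffusion code path, two different sets of weights.
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# A community fine-tune distributed as a single checkpoint file
# (hypothetical path); this is where the subject knowledge actually lives.
finetune = StableDiffusionPipeline.from_single_file(
    "models/80s_fantasy_films.safetensors", torch_dtype=torch.float16
)
```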