The generations I currently do have the following workflow:
Start:
- Text based image generation to find a good base image
- Take output and scale it up 4x to get more pixels to work with
Circular part:
- Image2Image inpainting with a mask, inpainting only the masked area at 1024x1024 to achieve more detail
- Change the text prompt or add a LoRA to get specific results for the masked area
- Save the resulting image, remove the mask and add another masked area on the new “updated” image
End:
- When results are good enough, scale down by 2x and save the image
I can get the Start and End parts working, but the circular part, where the output image is used as input, doesn't seem to work in ComfyUI no matter how I try, short of doing tons of manual steps.
Right now in Automatic1111 UI I can basically just send the output image to the next tab, or back to the inpaint image input when I’m happy with one of the masked results.
I’d love to be able to use the output as input in ComfyUI as well if that is possible.
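In case it helps, here is a rough sketch of how the circular part could be scripted outside the UI using ComfyUI's HTTP API (export the inpaint workflow via "Save (API Format)" and POST it to the `/prompt` endpoint). The node id, folder paths, filenames, and the fixed wait are placeholders I made up for illustration; the idea is just to copy each result back into ComfyUI's input folder and point the LoadImage node at it for the next pass.

```python
import json, time, shutil, urllib.request
from pathlib import Path

COMFY_URL = "http://127.0.0.1:8188"      # default ComfyUI address (adjust if needed)
COMFY_INPUT = Path("ComfyUI/input")      # adjust to your install
COMFY_OUTPUT = Path("ComfyUI/output")    # adjust to your install
LOAD_IMAGE_NODE = "10"                   # hypothetical id of the LoadImage node
                                         # in the exported API-format workflow

def queue_prompt(workflow: dict) -> None:
    """Submit one workflow run to ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def latest_output() -> Path:
    """Newest image written by the SaveImage node."""
    files = sorted(COMFY_OUTPUT.glob("*.png"), key=lambda p: p.stat().st_mtime)
    return files[-1]

# inpaint_api.json = the inpaint workflow exported with "Save (API Format)"
workflow = json.loads(Path("inpaint_api.json").read_text())

current = Path("base_upscaled.png")      # output of the Start phase
for step in range(5):                    # however many inpaint passes you want
    # copy the current image into ComfyUI's input folder so LoadImage can see it
    target = COMFY_INPUT / f"iter_{step}.png"
    shutil.copy(current, target)
    workflow[LOAD_IMAGE_NODE]["inputs"]["image"] = target.name
    # (change the prompt text / mask between iterations here as needed)
    queue_prompt(workflow)
    time.sleep(60)                       # crude wait; polling /history is nicer
    current = latest_output()            # feed the result back in on the next pass
```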
I"ve been wondering if there is a way to select a folder as input. That way i could automate all kinds of image edits like resizing or up scaling.
If this is possible you might be able to select the output folder as input.
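For what it's worth, even outside ComfyUI you can get this kind of folder-based automation with a few lines of Python (some custom node packs also add folder loaders, though I haven't tried them). A minimal sketch, assuming Pillow is installed and the paths are adjusted to your setup:

```python
from pathlib import Path
from PIL import Image  # pip install pillow

SRC = Path("ComfyUI/output")   # e.g. point this at the ComfyUI output folder
DST = Path("processed")        # where the edited copies go
DST.mkdir(exist_ok=True)

for img_path in sorted(SRC.glob("*.png")):
    with Image.open(img_path) as img:
        # 2x upscale with Lanczos resampling; swap in whatever edit you need
        edited = img.resize((img.width * 2, img.height * 2),
                            Image.Resampling.LANCZOS)
        edited.save(DST / img_path.name)
```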
Yes! Something like this would make a world of difference, really, as there are already nodes that can output to a specified folder with a prefix.
Today I learned that you can copy/paste from a preview to Clipspace. This means that I can have one piece of the workflow that only resizes the image, then right-click, copy, and paste it into another Load Image node.
I also found this inpainting workflow that I am modifying to fit my needs: https://civitai.com/articles/2782/comfyui-inpaint-only-mask-area
This tutorial for inpainting was also great with different ways of masking: https://www.youtube.com/watch?v=9jB2271-iEE