
Inpainting and img2img is desaturated/washed out #662

Open
chadb101 opened this issue Apr 25, 2024 · 10 comments

@chadb101

Every time I try inpainting or using img2img with this plugin, the result is washed out and the colors are way off. In ComfyUI I was able to prevent this from happening by decoding the source image with one VAE and encoding the result with another VAE, so I am wondering if this is the issue.

@Sil3ntKn1ght

Every time I try inpainting or using img2img with this plugin, the result is washed out and the colors are way off. In ComfyUI I was able to prevent this from happening by decoding the source image with one VAE and encoding the result with another VAE, so I am wondering if this is the issue.

Are you having the issue I am, where there are only 6 samplers and the renders are worse than they were on prior versions? Mine's a hot mess.

@Acly
Owner

Acly commented Apr 26, 2024

This shouldn't happen with images originally generated with the same settings.

If you import them from somewhere (photos, other AI gens), then the SD checkpoint may not always be able to match the colors of the original and may produce certain shifts. This usually only happens if the source material is somewhat outside the normal range.

In ComfyUI I was able to prevent this from happening by decoding the source img in one VAE and encoding the result in another VAE so I am wondering if this is the issue.

While that may have an impact and "fix" the issue, it would be more by accident, as it doesn't really make sense: VAE encode and decode should be matched. Alternatively, you can fix such issues more directly with HSV adjustment, color grading, and the like.
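As a rough illustration (using diffusers as a stand-in for the ComfyUI VAE nodes, with a placeholder file name), a matched encode/decode round trip looks like this:

```python
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image
from torchvision.transforms.functional import to_tensor

# Load one VAE and use it for both encode and decode.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
image = to_tensor(load_image("source.png")).unsqueeze(0) * 2 - 1  # scale to [-1, 1]

with torch.no_grad():
    latents = vae.encode(image).latent_dist.mode()
    roundtrip = vae.decode(latents).sample

# With a matched VAE the round trip only shows small reconstruction error;
# encoding with one VAE and decoding with a different one is where color
# shifts typically creep in.
```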

Anyway, it's hard to tell exactly what you mean by "washed out" without images; it's very subjective.

@chadb101
Author

It looks like it happens with any model:
- generate any image and apply it
- make an image control and set it to the generated image layer
- try inpainting on that generated image

It results in an inpaint or img2img that is washed out and unsaturated.

I read somewhere else that this might be a glitch with the img2img and inpainting nodes not properly applying the VAEs, but I don't know much about SD programming.

In ComfyUI though, decoding with one VAE and encoding with another produces better results, so allowing users to choose separate encode and decode VAEs might be useful.

@Acly
Owner

Acly commented Apr 28, 2024

Hm, this is definitely not normal, and I doubt it's related to the VAE, as I get consistent results with both the default and a custom VAE.

Hard to say more without images or a workflow file (you can enable "Dump Workflow" in the interface settings and the ComfyUI workflow.json will be written to the logs folder).

@JeffreyBull76

Same issue here: since recent updates, all inpainting produces this halo effect. It has nothing to do with model matching, as you can generate an image in Krita with a model, then inpaint onto it, and it still produces this effect. No errors are output in the cmd window and nothing seems obviously wrong; it just always produces this halo effect. Never had this issue before, and I have used your plugin for a long time now.

@Danamir
Contributor

Danamir commented May 3, 2024

This will not fix the halo, but can you try updating the ImageScale calls to use lanczos instead of bilinear? Look for def scale_image( in comfy_workflow.py.

This should result in both sharper and less aliased rendering on small inpaints, because it affects the first upscale and the last downscale of the inpainting workflow.
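As a standalone sketch of the resampling difference (plain Pillow with placeholder file names, not the plugin's code):

```python
from PIL import Image

# Hypothetical small inpaint crop, upscaled to the model's working resolution.
crop = Image.open("inpaint_region.png")
target = (1024, 1024)

bilinear = crop.resize(target, Image.Resampling.BILINEAR)  # current behaviour
lanczos = crop.resize(target, Image.Resampling.LANCZOS)    # suggested change
bilinear.save("scaled_bilinear.png")
lanczos.save("scaled_lanczos.png")
```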

I'm curious if your use case can also be improved by this code alteration.

Cf. #679

@Acly
Owner

Acly commented May 3, 2024

I have a feeling lots of different things are being mixed up here. But it's always helpful to have images to talk about, because people often have very different interpretations of "washed out", "halo", "desaturated", etc.

Color & brightness shifts

These have always been an issue, and they are particularly obvious when you have large, flat, plain-color surfaces without much detail. All regular SD1.5/SDXL models have a flaw where they converge to generating images with an average brightness of 0.5. This is not obvious when there is some irregular detail in the image, since dark areas can cancel out bright ones. But the examples by @catmino are prone to this issue: it's a relatively tiny area with not much going on, so the model is very constrained and can't hide the flaw.
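A quick way to check this drift on a saved result (assuming a result.png export; not part of the plugin):

```python
import numpy as np
from PIL import Image

# Mean brightness of a generated result; vanilla SD1.5/SDXL outputs tend
# to converge toward ~0.5 regardless of what the scene actually calls for.
result = np.asarray(Image.open("result.png").convert("L"), dtype=np.float32) / 255.0
print(f"mean brightness: {result.mean():.3f}")
```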

There is also the CosXL base model, which doesn't have this limitation, but it's not very popular.

Blending & halo

The introduction of Differential Diffusion changed how images are blended. Because there is now a consistent, linear falloff in the denoising strength at mask edges, less alpha blending is required to make the transition smooth, at least in most cases. Large uniform areas which exhibit the color/brightness shift issues above seem to be an exception. Previous versions were hiding this issue better because of more alpha blending. See this discussion for a similar issue and a guide on how to trade alpha blending against denoise blending.
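As an illustrative sketch only (not the plugin's blending code), a hard selection mask can be given such a linear falloff with a distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def linear_falloff(mask: np.ndarray, feather_px: int) -> np.ndarray:
    """Ramp a binary mask from 0 at its border to 1 over feather_px pixels."""
    inside = distance_transform_edt(mask > 0.5)  # distance to the mask edge
    return np.clip(inside / max(feather_px, 1), 0.0, 1.0)

# Example: a 64x64 square selection on a 256x256 canvas with a 16 px ramp.
mask = np.zeros((256, 256), dtype=np.float32)
mask[96:160, 96:160] = 1.0
soft_mask = linear_falloff(mask, feather_px=16)
```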

Minimum step count

This is difficult to get right, see also #483 and #670. Making this configurable via sampling presets is fairly easy to implement now; having good defaults is still important.

@catmino

catmino commented May 3, 2024

I'm aware of the different problems with step counts. My original thought was that this was the cause of the issue, but I was wrong about that: even though in my use case increasing the step count has helped quite a bit, the issue persisted depending on the image. The blending discussion seems interesting. I've looked through the code and found that part myself, but I wasn't entirely sure how it works and haven't gotten around to testing it yet; perhaps fiddling with those settings might help toward better results.

My question is though: why is this issue so apparent in Live mode, and way less apparent in Refine mode, despite using the same seed, no Seamless and no Focus in Generate, and the exact same Preset for both modes?

The workflows are nearly identical aside from different upscale values (and different image values); could this be the cause of such different results? Increasing the Resolution multiplier to 1.1x seemed to give slightly less blurry results in Live mode, but the affected area of the image was still smaller than in Refine mode, with a much sharper cut-off (seam), whereas Refine had smoother blending and affected a larger area around the selection.

Is there a difference in how the context and image mask are selected in Live mode in comparison to Refine? I have also tested out Selection Bounds and Entire Image in Refine mode (obviously both giving different results), but neither seemed to give identical results to Live mode, and from what I've read in the Issues section, Selection Mask is currently bugged, so I have disregarded that one.

Basically, whenever I try to generate something in Live mode I get these sharp seams, a blurry image, and color shifts, but doing the exact same thing in Refine mode gives much smoother seams (over a larger area), a sharper image, and more fitting colors. While I would sometimes expect this behaviour from LCM, it just doesn't make sense with the same Preset in both cases.

Another thing I forgot to mention is that I primarily tested everything with lossless .webp images. I have also noticed there were changes to the formats, but from my understanding that shouldn't affect anything other than the small images saved in the generation history?

@Acly
Owner

Acly commented May 3, 2024

Is there a difference in how the context and image mask are selected in Live mode in comparison to Refine?

Yes, the grow factor isn't applied in Live. Padding and feather are generally the same, but Live avoids scaling if the area is too small and expands the context instead. It might be an effect of a small area (-> large context) in combination with no inpaint model. Maybe try selecting an area in Live that is at least 512 (SD1.5) / 1024 (SDXL). I'm not seeing those problems...
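As a simplified sketch of that behaviour (not the actual plugin code, with hypothetical padding and native-size values):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def live_context(selection: Rect, image: Rect, padding: int, native: int = 512) -> Rect:
    # Apply padding around the selection (feathering omitted for brevity).
    ctx = Rect(selection.x - padding, selection.y - padding,
               selection.w + 2 * padding, selection.h + 2 * padding)
    # Below the model's native resolution: enlarge the context instead of upscaling.
    grow_w = max(native - ctx.w, 0)
    grow_h = max(native - ctx.h, 0)
    ctx = Rect(ctx.x - grow_w // 2, ctx.y - grow_h // 2, ctx.w + grow_w, ctx.h + grow_h)
    # Clamp to the canvas bounds.
    x0, y0 = max(ctx.x, image.x), max(ctx.y, image.y)
    x1 = min(ctx.x + ctx.w, image.x + image.w)
    y1 = min(ctx.y + ctx.h, image.y + image.h)
    return Rect(x0, y0, x1 - x0, y1 - y0)

# A 200x200 selection on a 2048x2048 canvas ends up with at least a
# 512x512 context for an SD1.5 checkpoint.
print(live_context(Rect(900, 900, 200, 200), Rect(0, 0, 2048, 2048), padding=32))
```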

@Acly
Owner

Acly commented May 14, 2024

1.17.2 now allows setting the minimum step count by editing sampler presets. It also uses a bit more alpha blending again.

I didn't change the default though; to me this is still inconclusive. I've tried a few more Live scenarios and generally get good results without color/brightness shifts at 4 steps. To investigate this further I'd really need a reproducible scenario (.kra file) with clear expectations.
