
This feature is interesting, but the results seem a bit disappointing. What is the difference between it and Fooocus zoom? #22

Open
chenpipi0807 opened this issue Dec 10, 2023 · 4 comments

Comments

@chenpipi0807

The similarity to the original is too low. I thought I could use it to enlarge a photo of my girlfriend. Does it work more like a refiner?

@RuoyiDu
Member

RuoyiDu commented Dec 11, 2023

Hi @chenpipi0807, thanks for your interest. As I've mentioned in many places, DemoFusion is proposed for high-resolution generation, and one potential application is using a real image as the initialization. However, it is still a generation process, and the generated results correspond strongly to SDXL's prior knowledge.

For your needs, you may want to look at super-resolution (SR) methods. SR is exactly the term we avoid using, precisely to prevent this kind of misunderstanding among our readers. I'm also bummed that there seems to be such a misunderstanding on social media right now about the motivation of our work :(
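To make the generation-vs-fidelity trade-off concrete: in a diffusers-style img2img setup (not DemoFusion's actual code; this is a generic illustration), a `strength` parameter decides how much of the denoising schedule runs on top of the noised input image. The higher the strength, the more steps the model runs, and the more SDXL's prior takes over from the original photo.

```python
# Illustrative sketch of the common diffusers img2img convention:
# `strength` controls how many denoising steps run on the noised input.
# Low strength -> few steps -> result stays close to the original image;
# strength = 1.0 -> full schedule -> the model's prior dominates.

def img2img_schedule(num_inference_steps: int, strength: float):
    """Return (t_start, steps_run) for a given step count and strength."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start

# At strength=0.3, only 15 of 50 steps run, preserving the input;
# at strength=1.0, all 50 run, so the output is essentially a fresh generation.
print(img2img_schedule(50, 0.3))  # (35, 15)
print(img2img_schedule(50, 1.0))  # (0, 50)
```

This is why no strength setting gives you "the same photo, just larger": any steps that do run re-synthesize content from the model's prior rather than recovering the original pixels.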

@zhanghongyong123456

Is it possible to add some new content and more detail on top of the original image? I found that image super-resolution (SR) adds very little detail or none at all, while this project changes the original image too much.

@Yggdrasil-Engineering

I'm providing some sample outputs I've made here as good real-world examples for anyone curious about what this is useful for.

Simply put, the level of detail I'm able to get out of this pipeline is amazing! But I'm generating from new ideas and concepts. While I can do img2img and ControlNet with it, I'll never get the original image back with more details, because of what it is (a generation process, as @RuoyiDu mentioned earlier).

I wouldn't say the results are disappointing at all! I don't intend to run defense, but when used as intended the results are absolutely astounding. I do hope the confusion propagating on social media quiets down a bit. When generating new works (or derivative works using ControlNet), I haven't found anything else that can output at this resolution with this level of detail.

There's even a pipeline I use to optimize generated images. Thanks to its multi-step output process, I can upscale the smaller intermediate generations to help repair oddities or repetitions that might appear in the last step, if I even need to. It's no hyperbole to say it has revolutionized how I approach and create generative AI images!
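The multi-step idea can be sketched roughly like this. Note this is a toy illustration of a progressive resolution schedule, where each pass works at a whole multiple of the base SDXL resolution; the function name and exact rounding are my assumptions, not DemoFusion's actual code:

```python
# Toy sketch of a progressive "upsample then diffuse" schedule: starting at
# the base resolution, each pass targets the next integer multiple, so the
# smaller intermediate generations are available as fallbacks to upscale
# separately if the final pass introduces artifacts.

def progressive_resolutions(base: int, target: int):
    """List the square resolutions visited from `base` up to `target`,
    in whole multiples of `base` (target rounded up to a multiple)."""
    n_scales = -(-target // base)  # ceiling division
    return [(base * s, base * s) for s in range(1, n_scales + 1)]

print(progressive_resolutions(1024, 3072))
# [(1024, 1024), (2048, 2048), (3072, 3072)]
```

The point of the sketch is just that every intermediate size is a complete image, which is what makes the "upscale a smaller generation to repair the last step" workflow possible.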

Example Images
Note: Some are raw/unedited, and some have minor amounts of post-processing via upscaling and simple/quick layer-overlay techniques.

[Four example images attached]

@gladzhang

Good work!
