
Invoke 3.0 Dockerfile #577

Open · oddomatik wants to merge 3 commits into master
Conversation

@oddomatik commented Aug 25, 2023

This should be considered a WIP, but it's an attempt at integrating InvokeAI v3.0 into the current workflow. It will need to be reviewed for conformity with @AbdBarho's standards for this repo, but I tried to symlink directories in a way that matches the existing structure. There may be better approaches, such as disabling automatic downloads and copying files around, but this is the best I could do for now; perhaps it will be useful as a starting point. It is entirely functional for me.

The Dockerfile here is based on the upstream InvokeAI's Dockerfile.

The main changes consist of pulling in the InvokeAI repo instead of building from it, plus symlinks to put files in their proper places. CPU support (as you have for A1111) is easily added if this repo's docker-compose is modified to pass GPU_DRIVER.
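As a sketch of the GPU_DRIVER idea (the variable handling and CUDA/ROCm index URLs are assumptions modeled on upstream InvokeAI's Dockerfile, not code from this repo's compose file), the build could map the argument to a PyTorch wheel index:

```shell
# Sketch: map a GPU_DRIVER build argument to a PyTorch wheel index.
# The cu118/rocm tags are assumptions; upstream may pin different versions.
GPU_DRIVER="${GPU_DRIVER:-cuda}"
case "$GPU_DRIVER" in
  cpu)  TORCH_INDEX="https://download.pytorch.org/whl/cpu" ;;
  rocm) TORCH_INDEX="https://download.pytorch.org/whl/rocm5.4.2" ;;
  *)    TORCH_INDEX="https://download.pytorch.org/whl/cu118" ;;
esac
echo "pip install torch --extra-index-url $TORCH_INDEX"
```

Passing `--build-arg GPU_DRIVER=cpu` at build time would then select the CPU wheel index.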

Due to the way InvokeAI runs, it downloads some "core" models and support files while providing very little debugging output; the first run took over an hour downloading upscale models and other supporting files.

Closes #555, closes #553

Update versions

InvokeAI 3.0 has a completely different structure and tries to auto-download everything. There may be better solutions, such as skipping model downloads, but after the initial run the container should start quickly. Most of the symlinks should map the models roughly onto the existing structure used in the stable-diffusion-webui-docker repo, but further testing is needed. The Dockerfile is based on upstream InvokeAI's Dockerfile.
InvokeAI converts models into the diffusers format and caches them. We don't want to regenerate this cache every time the container is rebuilt, but it takes a LOT of space on the local drive.
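A minimal sketch of the symlink approach described in these commits (all directory names and paths here are illustrative assumptions, not the PR's actual layout):

```shell
# Sketch: link InvokeAI's auto-managed directories onto a shared data volume
# so model downloads and the diffusers conversion cache survive rebuilds.
DATA="${DATA:-$(mktemp -d)}"   # shared volume, e.g. the repo's /data mount
ROOT="${ROOT:-$(mktemp -d)}"   # InvokeAI root, e.g. /invokeai upstream
mkdir -p "$DATA/models" "$DATA/invokeai-cache"
ln -sfn "$DATA/models" "$ROOT/models"         # checkpoint/model storage
ln -sfn "$DATA/invokeai-cache" "$ROOT/cache"  # converted diffusers cache
```

Keeping the cache target on the mounted volume is what avoids re-converting models after an image rebuild, at the cost of the disk space noted above.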
@AbdBarho (Owner) commented:

Thank you for doing this. I've wanted to update to 3.0 for a while, but the Invoke team changed a lot, and since it is all supposed to be automated away, I have to figure out how to fit the changes into the current structure.

I haven't taken a deeper look yet; I will come back to this later.

# syntax=docker/dockerfile:1.4
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
@AbdBarho (Owner) commented Aug 26, 2023:

We might need to remove this, since it is not part of the official OCI image format and is incompatible with podman. Or is it supported now?

@oddomatik (Author) replied:

Are you referring to the syntax statement? I have no familiarity with podman. I intended this commit as a jumping-off point, if it helps. I'm happy to dig in a bit further if needed, but I think we'll need to tag-team it.

@AbdBarho (Owner) replied:

Yeah, the syntax statement. But no worries, I want to dig deeper here anyway since a lot of stuff changed.
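For context on what the directive buys: the `# syntax=docker/dockerfile:1.4` line selects a specific BuildKit Dockerfile frontend, which unlocks features such as heredocs and RUN cache mounts. A hedged sketch (the pip cache mount is an illustration, not code from this PR):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM python:3.10-slim
# Cache mounts require the BuildKit frontend; dropping the syntax line for
# podman compatibility would mean avoiding features like this one.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install invokeai
```

If the directive is removed, the Dockerfile must stick to instructions the classic builder understands.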
