ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' #595

Open
2 tasks done
dinandmentink opened this issue Oct 4, 2023 · 15 comments · May be fixed by #596
Labels
bug Something isn't working

Comments

@dinandmentink

dinandmentink commented Oct 4, 2023

Has this issue been opened before?

  • It is not in the FAQ, I checked.
  • It is not in the issues, I searched.

Describe the bug

I think this is just me, but I'm running into an issue and hoping someone else knows how to fix it. I can't find anything about it online. Stable diffusion web-ui has been working perfectly on my machine for ages, but I recently tried to start it again and am getting stuck with this error:

webui-docker-invoke-1  | Traceback (most recent call last):
webui-docker-invoke-1  |   File "/opt/conda/bin/invokeai-configure", line 5, in <module>
webui-docker-invoke-1  |     from ldm.invoke.config.invokeai_configure import main
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
webui-docker-invoke-1  |     from ..args import PRECISION_CHOICES, Args
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
webui-docker-invoke-1  |     from ldm.invoke.conditioning import split_weighted_subprompts
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
webui-docker-invoke-1  |     from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
webui-docker-invoke-1  |     from .base import Generator
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
webui-docker-invoke-1  |     from pytorch_lightning import seed_everything
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks import Callback  # noqa: E402
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar  # noqa: F401
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
webui-docker-invoke-1  |     from torchmetrics.utilities.imports import _compare_version
webui-docker-invoke-1  | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)

Which UI

invoke

Hardware / Software

  • OS: Linux Mint
  • OS version: 21.1
  • WSL version (if applicable):
  • Docker Version: 24.0.6
  • Docker compose version: v2.21.0
  • Repo version: master 6a34739135eb112667f00943c1fac98ab294716a
  • RAM: 48GB
  • GPU/VRAM: nvidia 2060

Steps to Reproduce

docker compose --profile download up --build
docker compose --profile invoke up --build

Additional context

I already made a fresh clone and cleaned my docker containers (docker system prune -a). So this is a completely fresh build.

Anyone run into this? Any pointers?

Over at InvokeAI I found something about installing torchmetrics v0.11.4 (invoke-ai/InvokeAI#3658). Is this something I can configure with an env var?
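For anyone wondering why the import suddenly breaks: newer torchmetrics releases removed the private `_compare_version` helper that the pinned pytorch_lightning still imports at module load time. The sketch below is illustrative only, not the actual torchmetrics code (`compare_version` is a hypothetical name); it shows roughly what the helper did, built on `packaging` and `importlib.metadata`:

```python
# Rough stand-in for the removed torchmetrics `_compare_version` helper.
# Illustrative sketch only; the real helper lived in
# torchmetrics.utilities.imports and was private API.
import operator
from importlib.metadata import PackageNotFoundError, version as pkg_version
from packaging.version import Version

def compare_version(package: str, op, ver: str) -> bool:
    """Return True if `package` is installed and op(installed, ver) holds."""
    try:
        installed = Version(pkg_version(package))
    except PackageNotFoundError:
        return False  # package not installed at all
    return op(installed, Version(ver))

# Example: is `packaging` itself installed at version >= 20.0?
print(compare_version("packaging", operator.ge, "20.0"))
```

Because the helper is imported by its private name when pytorch_lightning loads, any torchmetrics release that drops it makes `import pytorch_lightning` fail, which is why pinning torchmetrics to 0.11.4 works as a fix.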

@dinandmentink dinandmentink added the bug Something isn't working label Oct 4, 2023
@dinandmentink dinandmentink linked a pull request Oct 4, 2023 that will close this issue
@jaredquekjz

jaredquekjz commented Oct 7, 2023

Yup, just faced the exact same issue with the invoke profile. Changed to the auto profile for now, which works.

@DoiiarX

DoiiarX commented Oct 19, 2023

Me too.

@cyril23

cyril23 commented Nov 4, 2023

Same here. Steps to reproduce:

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker/
docker compose --profile download up --build
docker compose --profile invoke up --build

Using an Amazon g5.xlarge instance (NVIDIA A10G Tensor Core GPU) with an EC2 Deep Learning Base GPU AMI (Ubuntu 20.04) 20231026 (ami-0d134e01570c1e7b4)

$ docker --version
Docker version 24.0.6, build ed223bc
$ uname -a
Linux ip-xxx-xxx-xxx-xxx 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:44:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

I've also tried git checkout tags/8.1.0, but got the same error.

:~/stable-diffusion-webui-docker$ docker compose --profile invoke up --build
[+] Building 0.7s (17/17) FINISHED                                                                                                            docker:default
 => [invoke internal] load build definition from Dockerfile                                                                                             0.0s
 => => transferring dockerfile: 1.99kB                                                                                                                  0.0s
 => [invoke internal] load .dockerignore                                                                                                                0.0s
 => => transferring context: 2B                                                                                                                         0.0s
 => [invoke internal] load metadata for docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime                                                         0.6s
 => [invoke internal] load metadata for docker.io/library/alpine:3.17                                                                                   0.7s
 => [invoke stage-1 1/8] FROM docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime@sha256:82e0d379a5dedd6303c89eda57bcc434c40be11f249ddfadfd5673b84  0.0s
 => [invoke internal] load build context                                                                                                                0.0s
 => => transferring context: 65B                                                                                                                        0.0s
 => [invoke xformers 1/3] FROM docker.io/library/alpine:3.17@sha256:f71a5f071694a785e064f05fed657bf8277f1b2113a8ed70c90ad486d6ee54dc                    0.0s
 => CACHED [invoke stage-1 2/8] RUN --mount=type=cache,target=/var/cache/apt   apt-get update &&   apt-get install make g++ git libopencv-dev -y &&     0.0s
 => CACHED [invoke stage-1 3/8] RUN git clone https://github.com/invoke-ai/InvokeAI.git /InvokeAI                                                       0.0s
 => CACHED [invoke stage-1 4/8] WORKDIR /InvokeAI                                                                                                       0.0s
 => CACHED [invoke stage-1 5/8] RUN --mount=type=cache,target=/root/.cache/pip   git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 &&   pip in  0.0s
 => CACHED [invoke stage-1 6/8] RUN --mount=type=cache,target=/root/.cache/pip   git fetch &&   git reset --hard &&   git checkout main &&   git reset  0.0s
 => CACHED [invoke xformers 2/3] RUN apk add --no-cache aria2                                                                                           0.0s
 => CACHED [invoke xformers 3/3] RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/  0.0s
 => CACHED [invoke stage-1 7/8] RUN --mount=type=cache,target=/root/.cache/pip   --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.  0.0s
 => CACHED [invoke stage-1 8/8] COPY . /docker/                                                                                                         0.0s
 => [invoke] exporting to image                                                                                                                         0.0s
 => => exporting layers                                                                                                                                 0.0s
 => => writing image sha256:effc0d511e7589ea6981692f8685c58396379348bbc89cd8adac14bb4191848d                                                            0.0s
 => => naming to docker.io/library/sd-invoke:30                                                                                                         0.0s
[+] Running 1/0
 ✔ Container webui-docker-invoke-1  Created                                                                                                             0.0s
Attaching to webui-docker-invoke-1
webui-docker-invoke-1  | Mounted ldm
webui-docker-invoke-1  | Mounted .cache
webui-docker-invoke-1  | Mounted RealESRGAN
webui-docker-invoke-1  | Mounted Codeformer
webui-docker-invoke-1  | Mounted GFPGAN
webui-docker-invoke-1  | Mounted GFPGANv1.4.pth
webui-docker-invoke-1  | Loading Python libraries...
webui-docker-invoke-1  |
webui-docker-invoke-1  | Traceback (most recent call last):
webui-docker-invoke-1  |   File "/opt/conda/bin/invokeai-configure", line 5, in <module>
webui-docker-invoke-1  |     from ldm.invoke.config.invokeai_configure import main
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
webui-docker-invoke-1  |     from ..args import PRECISION_CHOICES, Args
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
webui-docker-invoke-1  |     from ldm.invoke.conditioning import split_weighted_subprompts
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
webui-docker-invoke-1  |     from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
webui-docker-invoke-1  |     from .base import Generator
webui-docker-invoke-1  |   File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
webui-docker-invoke-1  |     from pytorch_lightning import seed_everything
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks import Callback  # noqa: E402
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
webui-docker-invoke-1  |     from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar  # noqa: F401
webui-docker-invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
webui-docker-invoke-1  |     from torchmetrics.utilities.imports import _compare_version
webui-docker-invoke-1  | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
webui-docker-invoke-1 exited with code 1
:~/stable-diffusion-webui-docker$ 

@AbdBarho
Owner

#596

@AbdBarho AbdBarho linked a pull request Nov 13, 2023 that will close this issue
@gitwittidbit

Hmm, if I understand this correctly, this error should not be happening anymore, right?

Well, I just ran into it with a fresh install today.

@ap1969

ap1969 commented Dec 30, 2023

Same here with invoke

@aravindprasad

Ditto here on a local install following the instructions, using the exact same reproduction steps described above.

@whereismyfun42

Anyone still getting this error: you can modify the Dockerfile at \stable-diffusion-webui-docker\services\invoke\Dockerfile by adding a RUN pip install torchmetrics==0.11.4 line right after WORKDIR ${ROOT}, so that section reads:

WORKDIR ${ROOT}

RUN pip install torchmetrics==0.11.4

RUN --mount=type=cache,target=/root/.cache/pip \
  git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 && \
  pip install -e .
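If you are not sure whether your container needs this pin, a quick diagnostic (a sketch; `module_has_attr` is a hypothetical helper name) is to test whether the module named in the traceback still exposes the attribute pytorch_lightning imports:

```python
# Check whether a module imports cleanly and exposes a given attribute.
# Hypothetical helper for diagnosing the `_compare_version` ImportError.
import importlib

def module_has_attr(module_name: str, attr: str = "_compare_version") -> bool:
    """True only if `module_name` imports and has attribute `attr`."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# Run inside the container; False means you need the torchmetrics pin.
print(module_has_attr("torchmetrics.utilities.imports"))
```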

@gitwittidbit

  • RUN pip install torchmetrics==0.11.4

Brilliant - thanks!

But now I'm facing another error: cannot import name 'ModelSearchArguments' from 'huggingface_hub'

@TIMESTICKING

  • RUN pip install torchmetrics==0.11.4

Same here. Did you solve it?

@TIMESTICKING

Yup just faced the exact same issue with the Invoke profile. Changed to auto profile for now - which works.

Hi, I wonder whether the already-downloaded files will be removed automatically when I switch from "invoke" to "auto". It's kind of a waste of space, since I would never use "invoke" again. Or how should I remove the files related to the "invoke" mode? Thanks.

@kkthebeast

Describe the bug

[+] Running 1/1
 ✔ Container webui-docker-invoke-1  Created                                                                                                                                                 0.1s
Attaching to invoke-1
invoke-1  | mkdir: created directory '/data/.cache/invoke'
invoke-1  | mkdir: created directory '/data/.cache/invoke/ldm/'
invoke-1  | Mounted ldm
invoke-1  | Mounted .cache
invoke-1  | Mounted RealESRGAN
invoke-1  | mkdir: created directory '/data/models/Codeformer/'
invoke-1  | Mounted Codeformer
invoke-1  | Mounted GFPGAN
invoke-1  | Mounted GFPGANv1.4.pth
invoke-1  | Loading Python libraries...
invoke-1  |
invoke-1  | Traceback (most recent call last):
invoke-1  |   File "/opt/conda/bin/invokeai-configure", line 5, in <module>
invoke-1  |     from ldm.invoke.config.invokeai_configure import main
invoke-1  |   File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
invoke-1  |     from ..args import PRECISION_CHOICES, Args
invoke-1  |   File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
invoke-1  |     from ldm.invoke.conditioning import split_weighted_subprompts
invoke-1  |   File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
invoke-1  |     from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
invoke-1  |   File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
invoke-1  |     from .base import Generator
invoke-1  |   File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
invoke-1  |     from pytorch_lightning import seed_everything
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
invoke-1  |     from pytorch_lightning.callbacks import Callback  # noqa: E402
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
invoke-1  |     from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
invoke-1  |     from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar  # noqa: F401
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
invoke-1  |     from torchmetrics.utilities.imports import _compare_version
invoke-1  | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
invoke-1 exited with code 1 

Which UI

invoke

Hardware / Software

  • OS: Windows 11 Pro

  • OS version: Version 23H2 (OS Build 22631.3296)

  • WSL version:
    NAME                 STATE    VERSION
    docker-desktop       Running  2
    docker-desktop-data  Running  2
  • Docker Version:
    Client:
    Cloud integration: v1.0.35+desktop.11
    Version: 25.0.3
    API version: 1.44
    Go version: go1.21.6
    Git commit: 4debf41
    Built: Tue Feb 6 21:13:02 2024
    OS/Arch: windows/amd64
    Context: default
    Server: Docker Desktop 4.28.0 (139021)
    Engine:
    Version: 25.0.3
    API version: 1.44 (minimum version 1.24)
    Go version: go1.21.6
    Git commit: f417435
    Built: Tue Feb 6 21:14:25 2024
    OS/Arch: linux/amd64
    Experimental: false
    containerd:
    Version: 1.6.28
    GitCommit: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
    runc:
    Version: 1.1.12
    GitCommit: v1.1.12-0-g51d5e94
    docker-init:
    Version: 0.19.0
    GitCommit: de40ad0

  • Docker compose version: Docker Compose version v2.24.6-desktop.1

  • Repo version: 1.2.0

  • RAM: 64GB

  • GPU/VRAM: RTX 4090/24GB

Steps to Reproduce

  1. Run the setup commands:
docker compose --profile download up --build
docker compose --profile invoke up --build
  2. Error:
invoke-1  | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
invoke-1 exited with code 1
  3. The container runs for 47 seconds and stops.

Additional context
new install with

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git


@mynameiskeen

  • RUN pip install torchmetrics==0.11.4

Brilliant - thanks!

But now I'm facing another error: cannot import name 'ModelSearchArguments' from 'huggingface_hub'

huggingface_hub removed ModelSearchArguments in v0.19. You can uninstall the current version and re-install v0.18.0 or below. Note that you also need to lower transformers to v4.35, otherwise you will get "AttributeError: module 'huggingface_hub.constants' has no attribute 'HF_HUB_CACHE'" instead.

I solved this issue by adding:

RUN --mount=type=cache,target=/root/.cache/pip \
  pip uninstall -y torchmetrics && \
  pip install torchmetrics==0.11.4 && \
  pip uninstall -y huggingface-hub && \
  pip install huggingface-hub==0.18.0 && \
  pip uninstall -y transformers && \
  pip install transformers==4.35.2
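After rebuilding with those pins, it may help to confirm the versions actually landed in the container. A small sketch (`check_pins` is a hypothetical helper; the expected versions are the ones pinned above):

```python
# Verify that the packages pinned in the Dockerfile resolve to the
# expected versions in the running environment.
from importlib.metadata import PackageNotFoundError, version

EXPECTED = {
    "torchmetrics": "0.11.4",
    "huggingface-hub": "0.18.0",
    "transformers": "4.35.2",
}

def check_pins(expected):
    """Map each package to (installed_version_or_None, matches_expected)."""
    report = {}
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            got = None  # package missing entirely
        report[pkg] = (got, got == want)
    return report

for pkg, (got, ok) in check_pins(EXPECTED).items():
    print(f"{pkg}: installed={got} ok={ok}")
```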

@dakyskye

dakyskye commented Apr 1, 2024

Welp @mynameiskeen, your changes seem to have almost solved the problem in my case 😅 now I get this:

invoke-1  | * --web was specified, starting web server...
invoke-1  | Traceback (most recent call last):
invoke-1  |   File "/opt/conda/bin/invokeai", line 8, in <module>
invoke-1  |     sys.exit(main())
invoke-1  |   File "/InvokeAI/ldm/invoke/CLI.py", line 184, in main
invoke-1  |     invoke_ai_web_server_loop(gen, gfpgan, codeformer, esrgan)
invoke-1  |   File "/InvokeAI/ldm/invoke/CLI.py", line 1078, in invoke_ai_web_server_loop
invoke-1  |     from invokeai.backend import InvokeAIWebServer
invoke-1  |   File "/InvokeAI/invokeai/backend/__init__.py", line 4, in <module>
invoke-1  |     from .invoke_ai_web_server import InvokeAIWebServer
invoke-1  |   File "/InvokeAI/invokeai/backend/invoke_ai_web_server.py", line 17, in <module>
invoke-1  |     from flask import Flask, redirect, send_from_directory, request, make_response
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/flask/__init__.py", line 7, in <module>
invoke-1  |     from .app import Flask as Flask
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 27, in <module>
invoke-1  |     from . import cli
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/flask/cli.py", line 17, in <module>
invoke-1  |     from .helpers import get_debug_flag
invoke-1  |   File "/opt/conda/lib/python3.10/site-packages/flask/helpers.py", line 14, in <module>
invoke-1  |     from werkzeug.urls import url_quote
invoke-1  | ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/opt/conda/lib/python3.10/site-packages/werkzeug/urls.py)
invoke-1  | Exception ignored in atexit callback: <built-in function write_history_file>
invoke-1  | FileNotFoundError: [Errno 2] No such file or directory
invoke-1 exited with code 1

@mynameiskeen

Welp @mynameiskeen , your changes seems to almost solve the problem in my case 😅 now I got this:

invoke-1  | ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/opt/conda/lib/python3.10/site-packages/werkzeug/urls.py)

Try adding this:
pip uninstall -y Werkzeug && \
pip install Werkzeug==2.2.2
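For context on why that pin helps, as a sketch: newer Werkzeug releases removed `werkzeug.urls.url_quote`, which the old Flask version used here imports at startup. The removed helper was essentially a thin wrapper over the stdlib `urllib.parse.quote` (the `url_quote` below is an approximation for illustration, not Werkzeug's exact code):

```python
# Approximate stand-in for the removed `werkzeug.urls.url_quote`,
# shown only to explain the ImportError; not Werkzeug's actual code.
from urllib.parse import quote

def url_quote(string: str, safe: str = "/:") -> str:
    """Percent-encode `string`, leaving characters in `safe` untouched."""
    return quote(string, safe=safe)

print(url_quote("a b/c"))  # -> a%20b/c
```

Pinning Werkzeug==2.2.2 restores the helper so the bundled Flask can import it.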
