
[Bug]: Torch not compiled with CUDA enabled #224

Open
derodz opened this issue Aug 16, 2023 · 9 comments
Labels
bug Something isn't working

Comments


derodz commented Aug 16, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Are you using the latest version of the extension?

  • I have the modelscope text2video extension updated to the latest version and I still have the issue.

What happened?

I completed a basic install of the web extension but cannot produce any video.

Steps to reproduce the problem

  1. Install per instructions
  2. Select Modelscope
  3. Attempt to generate video with any prompt

What should have happened?

The extension should work on Mac and produce videos.

WebUI and Deforum extension Commit IDs

webui commit id - 68f336b
txt2vid commit id - 20ead10

Torch version

2.0.1

What GPU were you using for launching?

M1 Max (used --use-cpu command line argument)

On which platform are you launching the webui backend with the extension?

Local PC setup (Mac)

Settings

(screenshot of the extension's settings panel)

Console logs

(base) user1@MBP ~ % cd ~/stable-diffusion-webui;./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on user1 user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.12 (main, Jun 20 2023, 19:43:52) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Version: v1.5.1
Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a

Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from /Users/user1/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /Users/user1/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 4.0s (launcher: 0.2s, import torch: 1.4s, import gradio: 0.5s, setup paths: 0.5s, other imports: 0.4s, load scripts: 0.4s, reload hypernetworks: 0.1s, create ui: 0.4s).
Applying attention optimization: InvokeAI... done.
Model loaded in 2.4s (load weights from disk: 0.2s, create model: 0.8s, apply weights to model: 0.5s, apply half(): 0.2s, move model to device: 0.5s, calculate empty prompt: 0.1s).
text2video — The model selected is: ModelScope (ModelScope-like)
 text2video extension for auto1111 webui
Git commit: 20ead103
Starting text2video
Pipeline setup
config namespace(framework='pytorch', task='text-to-video-synthesis', model={'type': 'latent-text-to-video-synthesis', 'model_args': {'ckpt_clip': 'open_clip_pytorch_model.bin', 'ckpt_unet': 'text2video_pytorch_model.pth', 'ckpt_autoencoder': 'VQGAN_autoencoder.pth', 'max_frames': 16, 'tiny_gpu': 1}, 'model_cfg': {'unet_in_dim': 4, 'unet_dim': 320, 'unet_y_dim': 768, 'unet_context_dim': 1024, 'unet_out_dim': 4, 'unet_dim_mult': [1, 2, 4, 4], 'unet_num_heads': 8, 'unet_head_dim': 64, 'unet_res_blocks': 2, 'unet_attn_scales': [1, 0.5, 0.25], 'unet_dropout': 0.1, 'temporal_attention': 'True', 'num_timesteps': 1000, 'mean_type': 'eps', 'var_type': 'fixed_small', 'loss_type': 'mse'}}, pipeline={'type': 'latent-text-to-video-synthesis'})
Traceback (most recent call last):
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/t2v_helpers/render.py", line 30, in run
    vids_pack = process_modelscope(args_dict, args)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 65, in process_modelscope
    pipe = setup_pipeline(args.model)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 31, in setup_pipeline
    return TextToVideoSynthesis(get_model_location(model_name))
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 113, in __init__
    self.diffusion = Txt2VideoSampler(self.sd_model, shared.device, betas=betas)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 102, in __init__
    self.sampler = self.get_sampler(sampler_name, betas=self.betas)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 152, in get_sampler
    sampler = Sampler.init_sampler(self.sd_model, betas=betas, device=self.device)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/samplers_common.py", line 87, in init_sampler
    return self.Sampler(sd_model, betas=betas, **kwargs)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 12, in __init__
    self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod))
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 17, in register_buffer
    attr = attr.to(torch.device("cuda"))
  File "/Users/user1/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Exception occurred: Torch not compiled with CUDA enabled

Additional information

No response

derodz added the bug label Aug 16, 2023
kabachuha (Owner) commented:

Are you able to use normal webui's functions like text2image?

kabachuha (Owner) commented:

If you're running on CPU, try selecting CPU in that setting above as well


derodz commented Aug 17, 2023

> Are you able to use normal webui's functions like text2image?

Yes, both txt2img and img2img work great


derodz commented Aug 17, 2023

> If you're running on CPU, try selecting CPU in that setting above as well

What commands do I run? I tried the following and I received the same error:

--use-cpu all --precision full --no-half --skip-torch-cuda-test

kabachuha (Owner) commented:

Oh, well then. I recall there are unconditional 'cuda' references in the code, so I'll need to look through them


derodz commented Aug 19, 2023

> Oh, well then. I recall there are unconditional 'cuda' references in the code, so I'll need to look through them

I tried to modify the following line:
File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/samplers/uni_pc/sampler.py", line 17, in register_buffer
    attr = attr.to(torch.device("cuda"))

to

attr = attr.to(torch.device("mps"))
and
attr = attr.to(torch.device("cpu"))

and instead I get the following error

Traceback (most recent call last):
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/t2v_helpers/render.py", line 30, in run
    vids_pack = process_modelscope(args_dict, args)
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 220, in process_modelscope
    samples, _ = pipe.infer(args.prompt, args.n_prompt, args.steps, args.frames, args.seed + batch if args.seed != -1 else -1, args.cfg_scale,
  File "/Users/user1/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 275, in infer
    self.sd_model.to(self.device)
  File "/Users/user1/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/Users/user1/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 844, in _apply
    self._buffers[key] = fn(buf)
  File "/Users/user1/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
Exception occurred: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
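Based on the two tracebacks, the sampler's register_buffer would likely need both changes at once: use the webui's actual device instead of the hardcoded "cuda", and downcast float64 buffers before moving them to MPS. A rough, untested sketch (shared.device is the webui's runtime device setting; the standalone fallback probe is an assumption):

```python
import torch

try:
    from modules import shared  # the webui's runtime device setting
    device = shared.device
except ImportError:
    # assumption for a standalone run outside the webui
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

def register_buffer(self, name, attr):
    # Use the webui's device instead of the hardcoded torch.device("cuda"),
    # and downcast float64 first: MPS tensors cannot hold float64.
    if torch.is_tensor(attr):
        if device.type == "mps" and attr.dtype == torch.float64:
            attr = attr.float()
        attr = attr.to(device)
    setattr(self, name, attr)
```

Note that the second traceback fails inside self.sd_model.to(self.device), so the model itself also carries float64 buffers (alphas_cumprod among them) that would need the same float32 downcast.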


mengxun commented Nov 2, 2023

Same error

peterschmidler commented:

Same error on M2

ManuelW77 commented:

MPS doesn't work with float64, as described in DLR-RM/stable-baselines3#914
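For reference, the MPS float64 limitation reproduces in isolation, independent of the webui (a minimal sketch, assuming a Mac with an MPS-enabled torch build):

```python
import torch

# requires a Mac with an MPS-enabled torch build
assert torch.backends.mps.is_available()

x = torch.zeros(2, dtype=torch.float64)
try:
    x.to("mps")  # raises: the MPS framework doesn't support float64
except TypeError as e:
    print(e)

y = x.float().to("mps")  # fine after downcasting to float32
print(y.dtype, y.device)  # torch.float32 mps:0
```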
