
MacOS Support #35

Open
yuriten opened this issue Sep 6, 2022 · 11 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@yuriten

yuriten commented Sep 6, 2022

Steps to Reproduce

  1. clone the repo
  2. copy model.ckpt to models
  3. cd ./AUTOMATIC1111
  4. docker compose up --build

The earlier steps complete without problems, but the last step reports this error:

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 2/2
 ⠿ Network automatic1111_default    Created                                                                                                 0.0s
 ⠿ Container automatic1111-model-1  Created                                                                                                 0.1s
Attaching to automatic1111-model-1
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

Hardware / Software:

  • macOS 12.5, Apple M1 Pro chip
  • GPU: none (no NVIDIA GPU)

What do I need to do to fix this?

@yuriten added the bug (Something isn't working) label Sep 6, 2022
@AbdBarho
Owner

AbdBarho commented Sep 6, 2022

@yuriten Mac support is a big subject. I know that the lstein fork has Mac support, but I cannot integrate it because I don't have a Mac.

If you or anyone else would like to contribute an implementation, I will gladly add it to the repo.

Related: #31

@AbdBarho added the awaiting-response (Waiting for the issuer to respond) and enhancement (New feature or request) labels and removed the bug (Something isn't working) and awaiting-response labels Sep 6, 2022
@AbdBarho changed the title from "MacOS M1Pro chip: Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]" to "MacOS Support" Sep 11, 2022
@jasalt

jasalt commented Sep 13, 2022

There's also SD in app form for Mac (Apple Silicon) now: https://github.com/divamgupta/diffusionbee-stable-diffusion-ui. That is probably the easiest way to get SD running there while Docker ARM support lags behind.

@AbdBarho added the help wanted (Extra attention is needed) label Sep 16, 2022
@AdamGoyer

gm,
Checking in briefly. Has any progress been made on Apple Silicon (M1) support since Sept 2022?

@jasalt

jasalt commented Jul 11, 2023

@AdamGoyer Running the AUTOMATIC1111 UI without Docker, following the wiki docs, has been the easiest and most feature-rich way to run SD Web UI on an M1 Mac of the options I have tried (rough steps sketched below). It works quite well on the 16 GB RAM model at least, and uses Python machine-learning packages customized for Mac hardware.

GPU passthrough for a (Docker) VM is not available there and might take a while to happen.
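For reference, a rough sketch of that native (non-Docker) route on Apple Silicon, roughly following the AUTOMATIC1111 wiki at the time (exact package names and flags may have changed since):

# Build dependencies via Homebrew, per the wiki's Apple Silicon instructions
brew install cmake protobuf rust python@3.10 git wget

# Clone the web UI and place a model checkpoint in its models folder
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cp model.ckpt stable-diffusion-webui/models/Stable-diffusion/

# First run creates a venv, installs a Mac-compatible PyTorch, and serves the UI at http://localhost:7860
cd stable-diffusion-webui
./webui.sh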

@AdamGoyer

Thank you for the follow-up, @jasalt.
I put up a $50 bounty on Replit for someone to work on it. Maybe we'll get lucky. :-)
https://replit.com/bounties/@adamgoyer/automatic-1111-docke
I'd like to test the Docker version if possible because, for my day job, we're building a distributed network that uses Docker containers as a key component. Hopefully this will be a fun challenge for someone.

@jasalt

jasalt commented Jul 11, 2023

@AdamGoyer Haha, nice. It would be cool for sure, but I wonder if it's even possible without Docker adding GPU passthrough like WSL2 has, which only Parallels and virgl seem able to do to some extent (ref https://apple.stackexchange.com/a/453103/29042). Or maybe there are some tricks to use PyTorch etc. from the Mac side.

@basavaraja-v

basavaraja-v commented Jul 12, 2023

@yuriten You need to remove the NVIDIA/GPU-related settings in https://github.com/AbdBarho/stable-diffusion-webui-docker/blob/master/docker-compose.yml, since your machine doesn't have an NVIDIA GPU, as shown below.

version: '3.9'

x-base_service: &base_service
  ports:
    - "${WEBUI_PORT:-7860}:7860"
  volumes:
    - &v1 ./data:/data
    - &v2 ./output:/output
  stop_signal: SIGKILL
  tty: true
  deploy:
    resources:
      reservations:
        devices: []

name: webui-docker

services:
  download:
    build: ./services/download/
    profiles: ["download"]
    volumes:
      - *v1

  invoke: &invoke
    <<: *base_service
    profiles: ["invoke"]
    build: ./services/invoke/
    image: sd-invoke:30
    environment:
      - PRELOAD=true
      - CLI_ARGS=--xformers

  comfy: &comfy
    <<: *base_service
    profiles: ["comfy"]
    build: ./services/comfy/
    image: sd-comfy:3
    environment:
      - CLI_ARGS=--cpu

  comfy-cpu:
    <<: *comfy
    profiles: ["comfy-cpu"]
    deploy: {}
    environment:
      - CLI_ARGS=--cpu
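With a compose file like the one above, the repo's usual workflow should still apply: fetch the models once with the download profile, then start one of the CPU profiles defined above, for example:

# one-time model download (per the repo's README)
docker compose --profile download up --build

# start the ComfyUI service in CPU mode, then open http://localhost:7860
docker compose --profile comfy-cpu up --build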

@Ntsako-Hlungwani

Docker Compose Configuration:
Check your Docker Compose configuration to make sure that you have specified the correct runtime and options for NVIDIA GPUs. Here is an example snippet you might include in your docker-compose.yml file:
services:
  automatic1111-model-1:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    # ... other configurations ...

@cmutnik

cmutnik commented Mar 23, 2024

@jasalt I made a Docker image that spins up locally (Mac M3) and launches AUTOMATIC1111/stable-diffusion-webui at localhost:7860.

I got it to work using the VS Code method from that link, but that only worked in VS Code because Gradio is so much fun. So I hacked together an image you can build and run locally, without requiring VS Code; it just needs these simple commands:

docker build -t sdwebui:lite -f Dockerfile .
docker run -it -p 7860:7860 sdwebui:lite

This has been tested and works, but the Dockerfile is a mess, haha. Definitely not good enough to submit as a PR yet. Whenever I have the time I'll clean it up and put in a PR, or anyone else who wants to make it prettier/remove duplicate logic can. Until then, I'm in the process of pushing it to Docker Hub:

docker image tag sdwebui:lite letsstartdocking/sdwebui:lightonmodels
docker push letsstartdocking/sdwebui:lightonmodels

All this to say: it's possible and I have the code; I just need to find the time so the PR isn't embarrassingly ugly, haha.
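For anyone who wants to experiment before that PR lands, here is a rough sketch of what a CPU-only image could look like; this is not cmutnik's actual Dockerfile, and the base image, packages, and CLI flags below are assumptions:

# Sketch only: CPU-only AUTOMATIC1111 web UI (no GPU passthrough inside Docker on macOS)
FROM python:3.10-slim

# git for cloning; libgl1/libglib2.0-0 are needed by opencv
RUN apt-get update && apt-get install -y git libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git /app
WORKDIR /app

# launch.py reads these flags from the COMMANDLINE_ARGS environment variable
ENV COMMANDLINE_ARGS="--skip-torch-cuda-test --use-cpu all --no-half --listen --port 7860"
EXPOSE 7860

# -f lets webui.sh run as the container's root user
CMD ["bash", "webui.sh", "-f"]

Such an image would be built and run the same way as the commands above (docker build -t sdwebui:lite -f Dockerfile . followed by docker run -it -p 7860:7860 sdwebui:lite).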

[Screenshot attachment: sd_webui_mac_m1]

@cmutnik

cmutnik commented Mar 23, 2024

Anyone who wants access to the really bad Dockerfile can have it (just no shaming until it's cleaned up, haha).

@cmutnik mentioned this issue Mar 23, 2024
@cmutnik

cmutnik commented Mar 23, 2024

The PR is open; please see the notes/comments there for discussion.
