
feat: add mps support #22

Open · Fodark wants to merge 5 commits into main
Conversation

@Fodark (Contributor) commented Mar 13, 2024

Detect the platform where the model is loaded and adjust torch.device and torch.dtype appropriately.
I was able to run the model on an M1 MacBook Pro (with poor performance at the moment).
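
The PR's actual selection logic isn't shown here, but a minimal sketch of what such platform detection could look like (the helper name `pick_device_and_dtype` and the specific dtype choices are assumptions, not code from this PR):

```python
import torch

def pick_device_and_dtype():
    # Hypothetical helper: prefer CUDA, then Apple's MPS backend, then CPU.
    if torch.cuda.is_available():
        return torch.device("cuda"), torch.bfloat16
    if torch.backends.mps.is_available():
        # MPS has narrower dtype support; float16 is a common choice there.
        return torch.device("mps"), torch.float16
    # CPU fallback: float32 for numerical stability.
    return torch.device("cpu"), torch.float32

device, dtype = pick_device_and_dtype()
print(device, dtype)
```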

@AbeEstrada commented

Slow, as mentioned, but it works.

[Screenshot: 2024-03-14 at 9:59:28 AM]

@Benjamin-eecs linked an issue Mar 15, 2024 that may be closed by this pull request
@mattkanwisher commented Mar 17, 2024

NotImplementedError: The operator 'aten::_upsample_bilinear2d_aa.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on pytorch/pytorch#77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
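
For reference, a minimal sketch of applying that temporary fix from within Python (an assumption about usage, not code from this PR; the variable has to be set before torch is imported, or you can export it in the shell instead):

```python
import os

# Must be set before `import torch`, otherwise the fallback is ignored.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # ops without MPS kernels now fall back to CPU (slower)
```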

And once I enable that, I get this error:
RuntimeError: User specified an unsupported autocast device_type 'mps'


Edit: OK, it works if you clear your Python env and downgrade the deps, as I just noticed in the PR.
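
For anyone who can't downgrade, one possible workaround for the autocast error is to guard autocast by device type (a sketch under the assumption that skipping autocast entirely on MPS is acceptable; the helper name `autocast_for` is hypothetical, not from this PR):

```python
import contextlib
import torch

def autocast_for(device: torch.device):
    # Older torch releases raise "User specified an unsupported autocast
    # device_type 'mps'", so use a no-op context on anything but CUDA.
    if device.type == "cuda":
        return torch.autocast(device_type="cuda", dtype=torch.float16)
    return contextlib.nullcontext()

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
with autocast_for(device):
    x = torch.randn(2, 4, device=device)
    print((x @ x.T).dtype)
```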


Successfully merging this pull request may close these issues.

Support for M1 Mac, or non-cuda devices