
[FIXED] NotImplementedError: No operator found for memory_efficient_attention_forward with inputs #400

Open
rantianhua opened this issue Apr 30, 2024 · 4 comments
Labels: fixed, URGENT BUG

Comments

@rantianhua

I'm a beginner trying Unsloth. I ran the free Llama 3 (8B) notebook and got the following error:

[Screenshot of the NotImplementedError traceback]

I also encountered the following error during the first installation step:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
xformers 0.0.26.post1 requires torch==2.3.0, but you have torch 2.2.1+cu121 which is incompatible.
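For reference, a minimal diagnostic sketch to confirm the mismatch pip is warning about (the version numbers in the comments are the ones from this report, not guaranteed for other environments):

import torch
import xformers

# xformers 0.0.26.post1 was built against torch 2.3.0, but Colab ships 2.2.1,
# so the compiled attention kernels fail to load and no operator is found.
print("torch   :", torch.__version__)         # e.g. 2.2.1+cu121
print("xformers:", xformers.__version__)      # e.g. 0.0.26.post1
print("CUDA    :", torch.cuda.is_available())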
@tariksghiouri

I'm facing the same issue and have no idea why; the cell was running fine just a couple of days ago.

@jamiehughes5926

Facing the same issue.

@danielhanchen danielhanchen added URGENT BUG Urgent bug fixed Fixed! labels Apr 30, 2024
@danielhanchen danielhanchen pinned this issue Apr 30, 2024
@danielhanchen
Contributor

@rantianhua @tariksghiouri @jamiehughes5926 Apologies for the issue! I should have written it here - please update the first cell's installation instructions from

%%capture
import torch
major_version, minor_version = torch.cuda.get_device_capability()
# Must install separately since Colab has torch 2.2.1, which breaks packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass

to

%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
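The pin "xformers<0.0.26" keeps xformers compatible with the torch 2.2.1 that Colab currently preinstalls. After re-running the cell and restarting the runtime, a minimal sanity check (a sketch, assuming a CUDA GPU is attached; the tensor shapes are arbitrary) is to call the kernel that was failing:

import torch
import xformers.ops as xops

# Inputs are (batch, seq_len, num_heads, head_dim); fp16 on the GPU so a
# memory-efficient attention kernel can actually be dispatched.
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
out = xops.memory_efficient_attention(q, q, q)
print("memory_efficient_attention OK:", out.shape)

If the install is still mismatched, this raises the same NotImplementedError as in the original report.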

@danielhanchen danielhanchen changed the title NotImplementedError: No operator found for memory_efficient_attention_forward with inputs [FIXED] NotImplementedError: No operator found for memory_efficient_attention_forward with inputs Apr 30, 2024
@tariksghiouri

Thank you Daniel :)

@danielhanchen danielhanchen unpinned this issue May 5, 2024