
Why does using xformers raise an error, and how can it be fixed? #422

Open
myface-wang opened this issue Apr 28, 2024 · 1 comment

@myface-wang

Enable xformers for U-Net

```
Traceback (most recent call last):
  File "C:\Users\24029\Downloads\lora-scripts\sd-scripts\train_network.py", line 996, in <module>
    trainer.train(args)
  File "C:\Users\24029\Downloads\lora-scripts\sd-scripts\train_network.py", line 242, in train
    vae.set_use_memory_efficient_attention_xformers(args.xformers)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 262, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 258, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 258, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 258, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 255, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\attention_processor.py", line 273, in set_use_memory_efficient_attention_xformers
    raise e
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\diffusers\models\attention_processor.py", line 267, in set_use_memory_efficient_attention_xformers
    _ = xformers.ops.memory_efficient_attention(
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "C:\Users\24029\Downloads\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(1, 2, 1, 40) (torch.float32)
    key : shape=(1, 2, 1, 40) (torch.float32)
    value : shape=(1, 2, 1, 40) (torch.float32)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
flshattF@0.0.0 is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
23:55:42-848789 ERROR Training failed / 训练失败
```
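Every rejected operator above says the same thing: this xFormers wheel was built without CUDA kernels, so no backend can serve the request. A hedged diagnosis/repair sketch follows; the `cu121` tag and index URL are examples, not a prescription — match them to the torch build your venv actually reports:

```shell
# Inspect which xFormers operators were built, as the error message itself suggests.
python -m xformers.info

# Check the torch build this venv uses: version, CUDA toolkit tag, and GPU visibility.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

# Reinstall xformers from the wheel index matching that torch/CUDA pair
# (cu121 is an example; pick the tag that matches the output above).
pip install -U xformers --index-url https://download.pytorch.org/whl/cu121
```

If `torch.version.cuda` prints `None`, the venv has a CPU-only torch, and reinstalling xformers alone will not help — torch itself needs the CUDA build first.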

@ChowLiang2000

I hit the same problem: `No module named 'xformers'`, so I can only disable this option. When I trained in March, the xformers checkbox still worked; the problem seems to have appeared after an update.
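This comment's failure mode is different from the original traceback: the package is missing entirely rather than built without CUDA. A minimal sketch to tell the two cases apart before enabling the option (the `xformers_available` helper is hypothetical, not part of sd-scripts):

```python
def xformers_available() -> bool:
    """Return True if the xformers package can be imported in this venv."""
    try:
        import xformers  # noqa: F401  # missing -> "No module named 'xformers'"
        return True
    except ImportError:
        return False


if __name__ == "__main__":
    if xformers_available():
        # Package imports, so a failure like the traceback above would point
        # to a build/CUDA mismatch rather than a missing install.
        print("xformers importable: check python -m xformers.info for CUDA support")
    else:
        print("xformers not installed in this venv: disable the option or reinstall")
```

Running this with the training venv's interpreter (`venv\Scripts\python.exe` on Windows) tells you which repair path applies.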
