[MTL][Qwen] model fail RuntimeError: "normal_kernel_cpu" not implemented for 'Byte' #10826

Closed
juan-OY opened this issue Apr 22, 2024 · 4 comments
juan-OY commented Apr 22, 2024

After upgrading from bigdl-llm to ipex-llm, we hit the error below; besides that, we also see an accuracy issue.

2024-04-21 21:35:27,923 - INFO - Converting the current model to sym_int4 format......
Traceback (most recent call last):
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\benchmark_test2intel\gen_prediction.py", line 78, in
model = AutoModelForCausalLM.load_low_bit(model_path, trust_remote_code=True, optimize_model=True).eval()
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\ipex_llm\transformers\model.py", line 657, in load_low_bit
) = model_class._load_pretrained_model(
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\transformers\modeling_utils.py", line 3125, in _load_pretrained_model
model.apply(model._initialize_weights)
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\torch\nn\modules\module.py", line 897, in apply
module.apply(fn)
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\torch\nn\modules\module.py", line 897, in apply
module.apply(fn)
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\torch\nn\modules\module.py", line 897, in apply
module.apply(fn)
[Previous line repeated 2 more times]
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\torch\nn\modules\module.py", line 898, in apply
fn(self)
File "C:\Users\test\Documents\qwen_validate\ultra_test_code_and_data\env\lib\site-packages\transformers\modeling_utils.py", line 1261, in _initialize_weights
self._init_weights(module)
File "C:\Users\test/.cache\huggingface\modules\transformers_modules\us_qwen_0435_r2-sym_int4\modeling_qwen.py", line 697, in init_weights
module.weight.data.normal
(mean=0.0, std=self.config.initializer_range)
RuntimeError: "normal_kernel_cpu" not implemented for 'Byte'
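
For context, the error is reproducible outside the loader. As the traceback shows, transformers' `_load_pretrained_model` re-runs the model's `_init_weights` via `model.apply(model._initialize_weights)`, and torch's `normal_` has no CPU kernel for integer dtypes, so calling it on the uint8 (Byte) storage that the quantized sym_int4 weights are packed into fails. A minimal sketch of the failing call:

```python
import torch

# Low-bit (sym_int4) weights are packed into uint8 (Byte) tensors.
w = torch.zeros(8, dtype=torch.uint8)

# _init_weights calls normal_() on module weights during re-initialization;
# normal_ has no CPU kernel for integer dtypes, so this raises:
# RuntimeError: "normal_kernel_cpu" not implemented for 'Byte'
w.normal_(mean=0.0, std=0.02)
```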

@leonardozcm self-assigned this Apr 22, 2024
@leonardozcm (Contributor) commented:

Sorry, with ipex-llm==2.5.0b20240421 and bigdl-core-xe-21==2.5.0b20240421 I can't reproduce this issue on either Arc or MTL (at least load_low_bit works fine):

(sgwhat-llm) D:\jinqiao\ipex-llm\python\llm\example\GPU\HF-Transformers-AutoModels\Model\qwen>python generate.py        
bin C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
C:\Users\arda\miniconda3\envs\sgwhat-llm\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: 'Could not find module 'C:\Users\arda\miniconda3\envs\sgwhat-llm\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full path with constructor syntax.'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
2024-04-22 13:48:15,416 - INFO - intel_extension_for_pytorch auto imported
Loading checkpoint shards: 100%|██████████| 8/8 [00:00<00:00,  9.34it/s]
2024-04-22 13:48:16,649 - INFO - Converting the current model to sym_int4 format......
-------------------- Prompt --------------------

<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
AI是什么?
<|im_end|>
<|im_start|>assistant

-------------------- Output --------------------

system
You are a helpful assistant.

user
AI是什么?

assistant
AI(人工智能)是指由计算机系统模拟、延伸和扩展人类智能的一门技术。它的目标是使计算机系统具有学习能力、推理能力和决策

@leonardozcm (Contributor) commented:

Passing _fast_init=False lets the model load normally:

model = AutoModelForCausalLM.load_low_bit(model_path, _fast_init=False, trust_remote_code=True, optimize_model=True).eval()
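
For reference, a fuller save/load sketch with this workaround (the model id and paths are illustrative; the flow follows ipex-llm's standard save/load examples):

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Convert once and save the low-bit weights (illustrative model id and path).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    load_in_4bit=True,
    trust_remote_code=True,
    optimize_model=True,
)
model.save_low_bit("./qwen-sym_int4")

# Reload later; _fast_init=False skips transformers' weight re-initialization,
# which would otherwise call normal_() on the already-quantized uint8 tensors.
model = AutoModelForCausalLM.load_low_bit(
    "./qwen-sym_int4",
    _fast_init=False,
    trust_remote_code=True,
    optimize_model=True,
).eval()
```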

@jason-dai (Contributor) commented:

> Passing _fast_init=False lets the model load normally:
>
> model = AutoModelForCausalLM.load_low_bit(model_path, _fast_init=False, trust_remote_code=True, optimize_model=True).eval()

Shall we update our save/load example to explicitly add this parameter?

@leonardozcm (Contributor) commented:

> Shall we update our save/load example to explicitly add this parameter?

Yes, and I think we can do this inside our load_low_bit API (so that users will not be aware of this slight change and all existing examples will keep working). Since all saved low-bit weights are already processed and need no initialization, this will not affect current functionality.
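
Until that change lands, a user-side sketch of the same idea (a hypothetical monkeypatch, not ipex-llm code) would be to wrap load_low_bit so _fast_init defaults to False:

```python
from ipex_llm.transformers import AutoModelForCausalLM

_original_load_low_bit = AutoModelForCausalLM.load_low_bit

def _load_low_bit_with_default(*args, **kwargs):
    # Saved low-bit weights are already quantized and must not be
    # re-initialized, so default transformers' fast-init path off.
    kwargs.setdefault("_fast_init", False)
    return _original_load_low_bit(*args, **kwargs)

AutoModelForCausalLM.load_low_bit = _load_low_bit_with_default
```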
