The Quick Start Code cannot be executed in mPLUG-Owl2 #195
Comments
Update your transformers library.

I updated to the latest version (transformers==4.36.2) but still have the problem.

I solved the problem by using transformers==4.32.0.
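A quick way to confirm the pin took effect before re-running the Quick Start (a minimal sketch; the 4.32.0 string is the version reported above, everything else is generic):

# Minimal check that the pinned transformers version is active.
# 4.32.0 is the version reported to work in this thread
# (pin it with: pip install transformers==4.32.0).
import transformers

if transformers.__version__ != "4.32.0":
    raise RuntimeError(
        f"transformers=={transformers.__version__} is installed; "
        "this thread reports the Quick Start working with 4.32.0")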
For the same snippet I got the following error:

I have the following transformers version: … Later I upgraded it to 4.32.0 as suggested, but the error persists.

Was anyone able to fix this?

Hello, you can change to …

Yes, this issue is introduced by mPLUG-Owl2.1, which disables the cls_token in the visual encoder. We fixed this issue in the latest commit.

Yes, I ran into this last week too and worked around it by turning off the cls_token check. Glad that it is now officially handled!
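For readers who want the spirit of that workaround before updating, a purely hypothetical sketch of tolerating a missing cls_token (the function and flag here are invented for illustration and are not the actual mPLUG-Owl2 code):

# Hypothetical illustration, not the actual mPLUG-Owl2 source: the reported
# workaround amounts to no longer assuming the visual encoder emits a cls_token.
import torch

def pool_visual_features(features: torch.Tensor, use_cls_token: bool) -> torch.Tensor:
    # features: (batch, num_tokens, hidden); with a cls_token it sits at index 0.
    if use_cls_token:
        return features[:, 0]
    return features.mean(dim=1)  # no cls_token: mean-pool the patch tokens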
When I run the Quick Start code from this page:
https://github.com/X-PLUG/mPLUG-Owl/tree/main/mPLUG-Owl2
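Judging from the traceback below, the failing step is the model-loading call, roughly along these lines (a paraphrase built from names in the trace; the checkpoint id and the exact call signature are assumptions, not the verbatim Quick Start code):

# Paraphrased repro sketch, not the verbatim Quick Start snippet: the failure
# happens inside load_pretrained_model (builder.py line 106 in the trace),
# which in turn calls AutoModelForCausalLM.from_pretrained.
from mplug_owl2.model.builder import load_pretrained_model

model_path = "MAGAer13/mplug-owl2-llama2-7b"  # assumed HF checkpoint id
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, "mplug-owl2-llama2-7b",  # assumed signature: path, base, name
    load_8bit=False, load_4bit=False, device="cuda")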
I got the following traceback:
Traceback (most recent call last):
  File "/home/kkk/.pycharm_helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "<string>", line 1, in <module>
  File "/home/kkk/DB/libs/mPLUGOwl/mPLUG-Owl2/mplug_owl2/model/builder.py", line 106, in load_pretrained_model
    model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
  File "/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3450, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/kkk/DB/libs/mPLUGOwl/mPLUG-Owl2/mplug_owl2/model/modeling_mplug_owl2.py", line 209, in __init__
    self.model = MPLUGOwl2LlamaModel(config)
  File "/home/kkk/DB/libs/mPLUGOwl/mPLUG-Owl2/mplug_owl2/model/modeling_mplug_owl2.py", line 201, in __init__
    super(MPLUGOwl2LlamaModel, self).__init__(config)
  File "/home/kkk/DB/libs/mPLUGOwl/mPLUG-Owl2/mplug_owl2/model/modeling_mplug_owl2.py", line 33, in __init__
    super(MPLUGOwl2MetaModel, self).__init__(config)
  File "/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 949, in __init__
    [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
  File "/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 949, in <listcomp>
    [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
TypeError: __init__() takes 2 positional arguments but 3 were given
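What the last two frames say: newer transformers releases (the 4.36 line, per the version reported above) build each decoder layer as LlamaDecoderLayer(config, layer_idx), while code written against older releases defines __init__(self, config) only, hence the extra positional argument. A standalone toy sketch of that mismatch (invented classes, not the real transformers or mPLUG-Owl2 ones):

# Toy sketch of the signature mismatch; invented classes for illustration only.
class OldStyleLayer:
    def __init__(self, config):  # old-style signature: Layer(config)
        self.config = config

def build_layers(layer_cls, config, num_layers=2):
    # Newer transformers constructs layers as layer_cls(config, layer_idx).
    return [layer_cls(config, layer_idx) for layer_idx in range(num_layers)]

build_layers(OldStyleLayer, config={})
# TypeError: __init__() takes 2 positional arguments but 3 were given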