
Unable to merge reward adapter into model #5

Open
DavidFarago opened this issue May 12, 2023 · 2 comments

Comments

@DavidFarago

Calling python merge_peft_adapter.py --model_name ./reward_model_vicuna-7b yields

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /home/paperspace/anaconda3/envs/vic310 did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda-11.7/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
script_args:  ScriptArguments(model_name='./reward_model_vicuna-7b', output_name=None)
Traceback (most recent call last):
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/peft/utils/config.py", line 106, in from_pretrained
    config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder)
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 166, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: './reward_model_vicuna-7b'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/paperspace/Vicuna-LoRA-RLHF-PyTorch/merge_peft_adapter.py", line 33, in <module>
    peft_config = PeftConfig.from_pretrained(peft_model_id)
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/peft/utils/config.py", line 108, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at './reward_model_vicuna-7b'

When calling python -m bitsandbytes, I do get SUCCESS! Installation was successful!.

When calling python merge_peft_adapter.py --model_name ./reward_model_vicuna-7b_100_2e-05/, I get the error

key:  
Traceback (most recent call last):
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/peft/tuners/lora.py", line 278, in __getattr__
    return super().__getattr__(name)  # defer to nn.Module's logic
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LoraModel' object has no attribute '_get_submodules'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/paperspace/Vicuna-LoRA-RLHF-PyTorch/merge_peft_adapter.py", line 60, in <module>
    parent, target, target_name = model.base_model._get_submodules(key) 
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/peft/tuners/lora.py", line 280, in __getattr__
    return getattr(self.model, name)
  File "/home/paperspace/anaconda3/envs/vic310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute '_get_submodules'. Did you mean: 'get_submodule'?

after loading checkpoint shards.

@Tejaswi-kashyap-006

I'm having the same issue.

@jackaduma
Owner

  1. ValueError: Can't find 'adapter_config.json' at './reward_model_vicuna-7b'.

Please make sure your input model path is correct.
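To expand on that: the HFValidationError appears because, when peft cannot find adapter_config.json in the given directory, PeftConfig.from_pretrained() falls back to treating the string as a Hugging Face Hub repo id, which ./reward_model_vicuna-7b is not. A minimal pre-flight check could look like this (a sketch; check_adapter_dir is a hypothetical helper, not part of the repo):

```python
import os

def check_adapter_dir(path):
    """Verify that `path` looks like a directory written by
    PeftModel.save_pretrained(); otherwise PeftConfig.from_pretrained()
    treats the string as a Hub repo id and fails validation."""
    if not os.path.isdir(path):
        print(f"{path} is not a directory")
        return False
    config_file = os.path.join(path, "adapter_config.json")
    if not os.path.isfile(config_file):
        print(f"{config_file} is missing")
        return False
    return True
```

Running such a check before calling PeftConfig.from_pretrained() turns the confusing repo-id validation error into a plain "file not found" message.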

  2. 'LlamaForCausalLM' object has no attribute '_get_submodules'. Did you mean: 'get_submodule'?

In my experience, this problem is most likely caused by your peft version; please take a look at peft/tuners/lora.py.

The current installation from git is version 0.3.0.dev0, which breaks merge_peft_adapter; you need to switch to peft==0.2.0 (0.3.0.dev0 no longer has the _get_submodules() function).
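For reference, the missing helper just resolves a dotted module path into (parent, target, target_name). A standalone sketch of that idea (using plain getattr so the illustration runs without torch; peft's actual helper walks nn.Module submodules):

```python
from functools import reduce

def get_submodules(model, key):
    """Sketch of what LoraModel._get_submodules provided in peft 0.2.0:
    resolve a dotted attribute path like "inner.weight" into
    (parent_module, target_module, target_name)."""
    parts = key.split(".")
    parent = reduce(getattr, parts[:-1], model)  # walk everything before the last dot
    target_name = parts[-1]
    target = getattr(parent, target_name)
    return parent, target, target_name
```

That said, pinning with pip install peft==0.2.0, as suggested above, is the simpler fix, since it restores the helper the script actually calls.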
