
401 Client Error: Unauthorized for url: https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json #43

Open
azuryl opened this issue Dec 19, 2023 · 1 comment


azuryl commented Dec 19, 2023

```
bash scripts/llama_prune.sh
[START] - Start Pruning Model
Traceback (most recent call last):
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 270, in hf_raise_for_status
response.raise_for_status()
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/transformers/utils/hub.py", line 389, in cached_file
resolved_file = hf_hub_download(
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1374, in hf_hub_download
raise head_call_error
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1247, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1624, in get_hf_file_metadata
r = _request_wrapper(
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 402, in _request_wrapper
response = _request_wrapper(
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 426, in _request_wrapper
hf_raise_for_status(response)
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 320, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-65813e67-47dd0288795e66057f2cb0d0;16159d73-3032-4e61-8a3f-18bc009609a8)

Repository Not Found for url: https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/azuryl/project/LLM-Pruner/hf_prune.py", line 314, in
main(args)
File "/home/azuryl/project/LLM-Pruner/hf_prune.py", line 39, in main
tokenizer = LlamaTokenizer.from_pretrained(args.base_model)
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1951, in from_pretrained
resolved_config_file = cached_file(
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/transformers/utils/hub.py", line 410, in cached_file
raise EnvironmentError(
OSError: decapoda-research/llama-7b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>
[FINISH] - Finish Pruning Model
[START] - Start Tuning
Traceback (most recent call last):
File "/home/azuryl/project/LLM-Pruner/post_training.py", line 262, in
main(args)
File "/home/azuryl/project/LLM-Pruner/post_training.py", line 33, in main
pruned_dict = torch.load(args.prune_model, map_location='cpu')
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/torch/serialization.py", line 986, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/torch/serialization.py", line 435, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/azuryl/anaconda3/envs/llamaprune/lib/python3.10/site-packages/torch/serialization.py", line 416, in init
super().init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'prune_log/llama_prune/pytorch_model.bin'
[FINISH] - Finish Prune and Post-Training.
[INFO] - The pruned model is at {prune_log/llama_prune/pytorch_model.bin}, and the recovery weight is at {tune_log/llama_0.2}/
You can use the command:
python generate.py --model_type tune_prune_LLM --ckpt prune_log/llama_prune/pytorch_model.bin --lora_ckpt tune_log/llama_0.2
to use the pruned model
```
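The 401 here is most likely not an authentication problem on your side: the `decapoda-research/llama-7b-hf` repository appears to have been removed from the Hugging Face Hub, so any `from_pretrained` call against it now raises `RepositoryNotFoundError`. A minimal sketch of the workaround, assuming a community mirror (the repo id below is an assumption; any accessible LLaMA checkpoint, or a local directory of converted LLaMA weights, should work):

```python
# Sketch: verify the base checkpoint resolves before running the full
# prune/tune pipeline. "baffo32/decapoda-research-llama-7B-hf" is an
# assumed community mirror; substitute any LLaMA repo you can access,
# or a local path such as "/path/to/llama-7b-hf".
from transformers import LlamaTokenizer

base_model = "baffo32/decapoda-research-llama-7B-hf"

# This is the exact call that fails in hf_prune.py (line 39 in the
# traceback above); if it succeeds here, the pruning stage can load it too.
tokenizer = LlamaTokenizer.from_pretrained(base_model)
print("resolved:", base_model)
```

Then pass the same id to the pruning script, e.g. `python hf_prune.py --base_model baffo32/decapoda-research-llama-7B-hf ...` (flag name inferred from `args.base_model` in the traceback), or edit the model id inside `scripts/llama_prune.sh`.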
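The later `FileNotFoundError` and the misleading `[FINISH]` messages are downstream of the same failure: `scripts/llama_prune.sh` keeps going after `hf_prune.py` exits, so `post_training.py` looks for `prune_log/llama_prune/pytorch_model.bin`, which was never written. A sketch of a fail-fast guard around the `torch.load` call (path taken from the log above):

```python
# Sketch: guard the checkpoint load in post_training.py so a failed
# pruning stage produces a clear error instead of a bare FileNotFoundError.
import os
import torch

prune_model = "prune_log/llama_prune/pytorch_model.bin"
if not os.path.exists(prune_model):
    raise SystemExit(
        f"Pruned checkpoint not found at {prune_model}; "
        "the pruning stage likely failed (see the 401 above), so fix the "
        "base model id before running post-training."
    )
pruned_dict = torch.load(prune_model, map_location="cpu")
```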
