
What is wrong with the --model_name_or_path', '../THUDM/chatglm-6b' argument? #126

Open
daiyucan opened this issue Jan 2, 2024 · 1 comment

Comments


daiyucan commented Jan 2, 2024

Traceback (most recent call last):
  File "/root/ChatGLM-Finetuning/train.py", line 234, in <module>
    main()
  File "/root/ChatGLM-Finetuning/train.py", line 96, in main
    tokenizer = MODE[args.mode]["tokenizer"].from_pretrained(args.model_name_or_path)
  File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1813, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/transformers/utils/hub.py", line 429, in cached_file
    resolved_file = hf_hub_download(
  File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '../THUDM/chatglm-6b'. Use repo_type argument if needed.
[2024-01-02 20:35:17,751] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 5491
[2024-01-02 20:35:17,752] [ERROR] [launch.py:321:sigkill_handler] ['/opt/conda/envs/pytorch1.8/bin/python3.9', '-u', 'train.py', '--local_rank=0', '--train_path', 'data/spo_0.json', '--model_name_or_path', '../THUDM/chatglm-6b', '--per_device_train_batch_size', '1', '--max_len', '1560', '--max_src_len', '1024', '--learning_rate', '1e-4', '--weight_decay', '0.1', '--num_train_epochs', '2', '--gradient_accumulation_steps', '4', '--warmup_ratio', '0.1', '--mode', 'glm', '--train_type', 'freeze', '--freeze_module_name', 'layers.27.,layers.26.,layers.25.,layers.24.', '--seed', '1234', '--ds_file', 'ds_zero2_no_offload.json', '--gradient_checkpointing', '--show_loss_step', '10', '--output_dir', './output-glm'] exits with return code = 1
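
For reference, the HFValidationError appears because transformers only treats model_name_or_path as a local checkpoint when that directory actually exists relative to the current working directory; otherwise the string is handed to the Hub resolver, where a relative path like '../THUDM/chatglm-6b' is not a valid repo id. A minimal check (the path below is simply the one from the log above):

```python
import os

model_path = "../THUDM/chatglm-6b"
# If this prints False, from_pretrained falls back to treating the string as a
# Hugging Face Hub repo id, which then fails repo-id validation as shown above.
print(os.path.isdir(model_path))
```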

Owner

liucongg commented Jan 7, 2024

The pretrained model path is wrong. You need to pass the path to a local copy of the model here; the path "../THUDM/chatglm-6b" most likely does not exist on your machine.
If you want to pass the transformers repo_id instead, you should probably write "THUDM/chatglm-6b".
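
In other words, --model_name_or_path must be either a directory that exists on disk or a plain Hub repo id. A minimal sketch of both options, assuming the tokenizer is loaded via AutoTokenizer with trust_remote_code (the training repo actually resolves the class through MODE[args.mode]["tokenizer"], and the local path below is hypothetical):

```python
from transformers import AutoTokenizer

# Option 1: a local directory that actually exists on disk
# (hypothetical path; point it at wherever you downloaded the model).
tokenizer = AutoTokenizer.from_pretrained(
    "/root/models/chatglm-6b",
    trust_remote_code=True,
)

# Option 2: the Hugging Face Hub repo id, which must be 'namespace/repo_name'
# with no leading './' or '../'.
tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
)
```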
