
[Feature] No configuration file for Qwen1.5-32b-chat found in the model directory #1057

Open
1 task
Egber1t opened this issue Apr 18, 2024 · 2 comments
Egber1t commented Apr 18, 2024

Describe the feature

Only 7b, 14b, and 72b are available. What should I do?

Will you implement it?

  • I would like to implement this feature and create a PR!
@Egber1t Egber1t changed the title [Feature] No configuration file for Qwen1.5-30b-chat found in the model directory [Feature] No configuration file for Qwen1.5-32b-chat found in the model directory Apr 18, 2024
Collaborator

tonysy commented Apr 18, 2024

You can change the path to 32b-chat, and we will update the new model config soon.
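As the reply suggests, the usual workaround is to copy an existing Qwen1.5 model config and point it at the 32B weights. A minimal sketch of such an OpenCompass-style config follows; the file name, `abbr`, and the exact sequence-length/GPU numbers are illustrative assumptions, not values from this thread:

```python
# Hypothetical configs/models/qwen/hf_qwen1_5_32b_chat.py, adapted from an
# existing Qwen1.5 config by changing the model path to the 32B checkpoint.
from opencompass.models import HuggingFaceCausalLM

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='qwen1.5-32b-chat-hf',           # illustrative abbreviation
        path='Qwen/Qwen1.5-32B-Chat',         # or a local checkpoint path
        tokenizer_path='Qwen/Qwen1.5-32B-Chat',
        model_kwargs=dict(device_map='auto', trust_remote_code=True),
        tokenizer_kwargs=dict(
            padding_side='left',
            truncation_side='left',
            trust_remote_code=True,
        ),
        max_seq_len=2048,
        max_out_len=100,
        batch_size=8,
        run_cfg=dict(num_gpus=2, num_procs=1),  # a 32B model typically needs >1 GPU
    )
]
```

Verify the field names against the Qwen1.5 configs that already ship with the repo before using this.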

Author

Egber1t commented Apr 18, 2024

Thanks! I converted my fine-tuned 32b Qwen model to HF format myself, then ran the following command: python run.py --datasets cmmlu_gen
--hf-path /root/autodl-tmp/Qwen1.5-32B-Chat
--model-kwargs device_map='auto'
--tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True
--max-seq-len 300
--max-out-len 5
--batch-size 8
--num-gpus 1
Do I need to add any other settings, such as a prompt template?
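On the template question: Qwen1.5-Chat models were trained with a ChatML-style prompt format, and evaluating a chat model without it can hurt scores. One way to supply it in OpenCompass is a `meta_template` in a model config; the sketch below assumes OpenCompass's meta-template convention and Qwen's ChatML tokens, so check it against the current docs:

```python
# Hypothetical meta_template for Qwen1.5-Chat's ChatML prompt format.
_meta_template = dict(
    round=[
        dict(role='HUMAN', begin='<|im_start|>user\n', end='<|im_end|>\n'),
        dict(role='BOT', begin='<|im_start|>assistant\n', end='<|im_end|>\n',
             generate=True),  # generation happens in the assistant turn
    ],
)

# Passed into the model dict in a config file, e.g.:
#   dict(type=HuggingFaceCausalLM, ..., meta_template=_meta_template)
```

Note this requires running from a config file rather than the bare `--hf-path` CLI form shown above.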
