[Bug] Neither llama3 nor Chinese-LLaMA-Alpaca-3 generates any output, whether via the pipeline API or `lmdeploy chat` (llama2 works fine) #1538
Comments
Have you tried the official llama3-8b-instruct?
The weights I downloaded should be the official ModelScope release. https://www.modelscope.cn/models/LLM-Research/Meta-Llama-3-8B-Instruct/summary — this link is the officially verified one.
I cannot reproduce your issue.
Please pass
I downloaded the weight files from the huggingface site:

```python
import os

os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
command_str = ('huggingface-cli download --token hf_Bxxx --resume-download '
               'meta-llama/Meta-Llama-3-8B-Instruct --local-dir '
               + os.environ.get('HOME')
               + '/models/meta-llama/Meta-Llama-3-8B-Instruct1')
os.system(command_str)
```

Then I activated the lmdeploy 0.4.0 environment.
Modified start_meta_llama3.py
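For context, a minimal sketch of what such a start_meta_llama3.py driver might look like with lmdeploy's `pipeline` API. The local model path is an assumption based on the `huggingface-cli download` command earlier in this thread, and running it requires a CUDA-capable GPU:

```python
import os

# Hypothetical local path, matching the --local-dir used in the download step.
model_path = os.path.expanduser('~/models/meta-llama/Meta-Llama-3-8B-Instruct1')

def chat_once(prompt: str):
    """Run a single prompt through the lmdeploy pipeline (GPU required)."""
    from lmdeploy import pipeline  # lazy import: heavy, GPU-backed
    pipe = pipeline(model_path)
    return pipe([prompt])
```

Calling `chat_once('Hello')` should return a non-empty response list; in the bug reported here, generation produces no result.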
I am on WSL. I personally suspect the lmdeploy model conversion is broken in the WSL environment.
@irexyc may follow up on this issue
@lzhangzz FYI
Hi, I ran into this problem too. Has it been resolved?
Are you also using a WSL environment?
I am on Windows.
On bare metal?
Yes, running directly on bare metal. llama2 works fine. The GPU is a 4080.
Environment
TorchVision: 0.17.2+cu121
@zhanghui-china Could you check whether you see a similar issue on your side?
@liaoduoduo How much memory does your 4080 have?
@lvhan028 It's 16GB.
16GB is not enough. In principle, LMDeploy should report an OOM error in that case.
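The memory remark above can be checked with a quick back-of-envelope calculation. The 8.03B parameter count is the published size of Meta-Llama-3-8B; in fp16/bf16 the weights alone nearly fill a 16 GB card before any KV cache or activations are allocated:

```python
# Why a 16 GB card is too tight for Llama-3-8B at half precision.
params = 8.03e9          # approximate parameter count of Meta-Llama-3-8B
bytes_per_param = 2      # fp16 / bf16
weights_gib = params * bytes_per_param / 1024**3
print(f"weights alone: {weights_gib:.1f} GiB")  # ~15.0 GiB, leaving <1 GiB for KV cache
```

With less than 1 GiB of headroom left for the KV cache and CUDA context, the expected failure mode is an out-of-memory error rather than silent empty output.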
Checklist
Describe the bug
start_chinese_llama3.py
start_meta_llama3.py
start_chinese_llama2.py
start_meta_llama2.py
Reproduction
llama3 generation:
python start_chinese_llama3.py
python start_meta_llama3.py
llama2 generation:
python start_chinese_llama2.py
python start_meta_llama2.py
Environment
Error traceback
No response