
Does it support Qwen1.5 Model? #78

Open

kicGit opened this issue Feb 8, 2024 · 8 comments

kicGit commented Feb 8, 2024

No description provided.

@wanshichenguang

Same question here. The llama.cpp conversion script doesn't seem to convert it correctly.

@sweetcard

You can try ollama; a single command is enough to try it out.
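
For reference, a minimal sketch of what that single command looks like, assuming the `qwen` tag in the ollama model library (which maps to Qwen1.5) with the `0.5b` size tag; exact tag names may differ across ollama versions:

```bash
# Pull (if needed) and chat with the Qwen1.5 0.5B chat model from the ollama library.
# The "qwen:0.5b" tag is an assumption; check the ollama library page for exact tags.
ollama run qwen:0.5b
```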

@kicGit (Author) commented Feb 19, 2024

> You can try ollama; a single command is enough to try it out.

How were your results? I tested qwen1.5-0.5b with ollama and the output quality was very poor.

@sweetcard

> You can try ollama; a single command is enough to try it out.
>
> How were your results? I tested qwen1.5-0.5b with ollama and the output quality was very poor.

0.5B is just too small. Getting good quality out of a model that size is very hard right now.

@anaivebird

> Same question here. The llama.cpp conversion script doesn't seem to convert it correctly.

Did you ever get this solved?

@YiandLi commented Feb 24, 2024

There are GGUF versions on Hugging Face that you can use directly.

https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat-GGUF
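
A minimal sketch of using one of those files, assuming `huggingface-cli` is installed, that the repo contains a file named `qwen1_5-1_8b-chat-q5_k_m.gguf` (the exact filename is an assumption; check the repo's file list), and a Feb 2024 build of llama.cpp where the example binary is still called `main`:

```bash
# Download one quantized GGUF file from the official repo.
huggingface-cli download Qwen/Qwen1.5-1.8B-Chat-GGUF \
  qwen1_5-1_8b-chat-q5_k_m.gguf --local-dir .

# Run it with llama.cpp's example binary (./main in builds of that era).
./main -m qwen1_5-1_8b-chat-q5_k_m.gguf -p "Hello" -n 128
```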

@anaivebird

> There are GGUF versions on Hugging Face that you can use directly.
> https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat-GGUF

If it's a model I fine-tuned myself, how do I convert it to GGUF?

@helloHKTK

> There are GGUF versions on Hugging Face that you can use directly.
> https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat-GGUF
>
> If it's a model I fine-tuned myself, how do I convert it to GGUF?

Did you ever figure out how to convert your fine-tuned model to GGUF?
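
For anyone landing here later, a minimal sketch of the usual route, assuming a full Qwen1.5 fine-tune saved in Hugging Face format under ./qwen1.5-finetuned (the path and quantization type below are placeholders) and a llama.cpp checkout recent enough to recognize the qwen2 architecture:

```bash
# 1. Convert the Hugging Face checkpoint to an f16 GGUF.
#    (Script name as of Feb 2024 llama.cpp; later renamed convert_hf_to_gguf.py.)
python convert-hf-to-gguf.py ./qwen1.5-finetuned \
  --outfile qwen1.5-finetuned-f16.gguf --outtype f16

# 2. Optionally quantize the f16 GGUF down to a smaller format.
#    (The binary was called ./quantize at the time; later llama-quantize.)
./quantize qwen1.5-finetuned-f16.gguf qwen1.5-finetuned-q4_k_m.gguf q4_k_m
```

If the fine-tune is a LoRA adapter rather than a full checkpoint, merge the adapter into the base model first; the conversion script expects a complete Hugging Face-format model directory.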
