ggml-model-q8_0_v2 fails to deploy with ollama #33
Per the documentation, ollama should be v0.1.33. Today I saw that v2 was released, downloaded the model, and tried to deploy it, which failed. This did not happen last time: in the same environment I redeployed the previous release (call it v1) and it still deploys successfully. Only v2 fails like this.
I tried it on Colab and the model file itself is fine. Please check whether your download is complete (for example, by comparing the sha256 checksum).
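The checksum comparison suggested above can be done from the command line. The file name and expected hash below are placeholders; substitute the actual GGUF file you downloaded and the sha256 value published on the model's release page:

```shell
# Placeholder values: use your actual downloaded file and the sha256
# hash listed on the model's release page.
model="ggml-model-q8_0_v2.gguf"
expected="<sha256 from the release page>"

# Compute the local checksum and compare it against the published one.
actual=$(sha256sum "$model" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - re-download the model"
fi
```

A mismatch usually means the download was truncated or corrupted, which would also explain ollama failing to load the file.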
OK, thanks.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance. |
Items to check before submitting
Issue type
Model quantization and deployment
Base model
Llama-3-Chinese-8B-Instruct (instruct model)
Operating system
Windows
Describe the issue in detail
Modelfile:
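For reference, a minimal ollama Modelfile for running a locally quantized GGUF file generally looks like the sketch below; the FROM path and the parameter value are illustrative, not taken from the report:

```
# Sketch only: path and parameter are illustrative, not the reporter's file.
FROM ./ggml-model-q8_0_v2.gguf
PARAMETER num_ctx 4096
```

It can then be built and run with `ollama create my-llama3-zh -f Modelfile` followed by `ollama run my-llama3-zh` (the model name here is a placeholder).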
Dependencies (required for code-related issues)
Run logs or screenshots