
Failed to load model for eachadea/ggml-vicuna-7b-1.1 #15

Open
fakechris opened this issue Apr 16, 2023 · 2 comments
Labels
bug Something isn't working
Milestone

Comments

@fakechris commented Apr 16, 2023

After downloading eachadea/ggml-vicuna-7b-1.1's ggml-vicuna-7b-1.1-q4_0.bin model from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main, I was able to add it as a Chat Source successfully.
However, during the conversation the error "Failed to load model" occurred.
I also tried llama.cpp: the model loads only after updating to the latest llama.cpp, while a build from five days ago also fails to load it. I'm not sure whether the ggml model format used by llama.cpp has been changed in some way.
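A format change is consistent with this symptom: llama.cpp revised its on-disk container around that time, and loaders built against an older format reject files with a newer magic number. Below is a small sketch for checking which container a .bin file uses; the magic values and the helper name `detect_ggml_format` are assumptions based on the llama.cpp formats circa early 2023, not part of this project.

```python
import struct

# Assumed llama.cpp container magics (little-endian uint32 at file offset 0):
#   0x67676d6c "ggml" - original unversioned format
#   0x67676d66 "ggmf" - versioned format
#   0x67676a74 "ggjt" - versioned, mmap-able format introduced around April 2023
MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

def detect_ggml_format(path):
    """Return a human-readable name for the model file's container format."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, f"unknown (0x{magic:08x})")
```

If the app's bundled loader only understands an older magic, a freshly converted model would fail exactly like this until the bindings are updated.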

@alexrozanski alexrozanski added the bug Something isn't working label Apr 17, 2023
@alexrozanski (Owner) commented Apr 17, 2023

Hey @fakechris, I know there have been some changes to llama.cpp in the last week; I'm working on updating the bindings so that these are supported. I haven't tested Vicuna support specifically either, but that's coming.

@alexrozanski alexrozanski added this to the v1.3 milestone Apr 17, 2023
@zakkor commented Apr 22, 2023

Vicuna works with the same sort of parameters as plain LLaMA, but AFAIK it requires the "User:" prompt format to be used.
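To illustrate what that prompt format looks like, here is a minimal sketch of assembling a Vicuna-style chat prompt. The exact separators vary between Vicuna versions (v1.1 model cards describe uppercase USER/ASSISTANT turns), so the template and the helper name `build_vicuna_prompt` below are illustrative assumptions; check the model card for the canonical format.

```python
def build_vicuna_prompt(user_message, history=()):
    """Assemble a Vicuna-style chat prompt from prior (user, assistant)
    turns plus the new user message. Template is an assumption; the exact
    role labels and separators depend on the Vicuna version."""
    lines = []
    for user_turn, assistant_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # leave the assistant turn open for generation
    return "\n".join(lines)
```

Feeding the model free-form text without these role markers is a common cause of degraded or runaway output with instruction-tuned checkpoints like Vicuna.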
