Update GPT4All/llama.cpp #25

Open
saul-jb opened this issue Sep 6, 2023 · 1 comment
Comments

saul-jb commented Sep 6, 2023

GPT4All uses a newer version of llama.cpp that can handle the new ggml formats. Currently, this project throws an error similar to the following if you attempt to load a model in one of the newer formats:

error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
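For context, the `(magic, version)` pair in that error decodes to the four ASCII bytes `ggjt` and version 2, i.e. a newer ggjt-format model that the older loader does not recognize. Below is a minimal sketch of the same header check in Python. The magic constants are the publicly known llama.cpp values; the descriptions and the helper name `identify_model` are illustrative, not part of either project.

```python
import struct

# Known ggml-family magics, as llama.cpp reads them: a little-endian
# uint32 taken from the first four bytes of the file.
MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

def identify_model(path):
    """Describe a model file's format by mirroring the (magic, version)
    check that produced the error above."""
    with open(path, "rb") as f:
        header = f.read(8)
    if header[:4] == b"GGUF":  # the successor format stores its magic as plain ASCII
        return "gguf"
    (magic,) = struct.unpack("<I", header[:4])
    name = MAGICS.get(magic)
    if name is None:
        return f"unknown magic {magic:08x}; is this really a GGML file?"
    if magic == 0x67676D6C:
        return name  # the unversioned format has no version field
    (version,) = struct.unpack("<I", header[4:8])
    return f"{name}, version {version}"
```

Running this on a model that triggers the error above would report `ggjt (versioned, mmap-able), version 2`, which the llama.cpp revision bundled here predates.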
kuvaus (Owner) commented Sep 8, 2023

Thanks for this!

I looked it up and tried to build the new gpt4all-backend.

From a quick look, it seems it only supports dynamic linking, unlike this project. There is a good reason for dynamic linking: it avoids the need for separate AVX1 and AVX2 builds, and it supports multiple llama.cpp versions at the same time. But it also means a binary compiled on one machine cannot simply be trusted to work on another.

With this project, one is unfortunately stuck with the old format for now.

It would be good to leave this issue open so that people know this project does not work with the new ggml formats.
