
backend: do not crash if GGUF lacks general.architecture #2346

Merged · 1 commit · May 15, 2024

Conversation

cebtenzzre (Member)

Fix a crash when starting GPT4All with a non-llama.cpp GGUF (such as a stable-diffusion.cpp model) installed.

Also fix a crash when trying to load such a model. Now the UI displays an "Unsupported file format" error instead.

Fixes #2292

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
cebtenzzre requested a review from manyoso on May 14, 2024, 21:33
```cpp
@@ -786,12 +786,14 @@ const std::vector<LLModel::Token> &GPTJ::endTokens() const
}

const char *get_arch_name(gguf_context *ctx_gguf) {
    std::string arch_name;
```
Collaborator

Eww, stray code.

gpt4all-backend/llamamodel.cpp — two review threads, resolved
manyoso (Collaborator) commented May 15, 2024:

The other thing I'll say about this is that it encourages us to either upstream our gptj support or drop it, as the code copying is biting us.

cebtenzzre merged commit 9f9d8e6 into main on May 15, 2024 (6 of 19 checks passed) and deleted the fix-archless-gguf-crash branch at 17:57.
Linked issue (closed by this merge): Installing a stable-diffusion.cpp model crashes GPT4All