
Failed to load model, Wrong MAGIC in header #27

Open
TeqGin opened this issue Jan 4, 2024 · 1 comment

Comments


TeqGin commented Jan 4, 2024

I quantized the Llama 7B-chat model with llama.cpp and got the file ggml-model-q4_0.gguf. But llama.go does not seem to support the GGUF format; it shows this error:

```
[ERROR] Invalid model file '../llama.cpp/models/7B/ggml-model-q4_0.gguf'! Wrong MAGIC in header

[ ERROR ] Failed to load model "../llama.cpp/models/7B/ggml-model-q4_0.gguf"
```


macie commented Jan 20, 2024

The llama.cpp project is under active development and from time to time introduces breaking changes (including to the GGUF format). This project stopped development around April 2023, so it probably isn't usable with today's models.
