Issues: ggerganov/llama.cpp

Issues list

Bug: Fatal signal 11 (SIGSEGV) on Google Pixel 8 (dart) [bug-unconfirmed, critical severity]
#7908 opened Jun 12, 2024 by g1henx

Bug: The "server" provided web-ui chat seems to sometimes not properly quote "<" ">" characters in its HTML output [bug-unconfirmed, low severity]
#7905 opened Jun 12, 2024 by ghchris2021

Feature Request: Add VideoLLaMA2 support
#7900 opened Jun 12, 2024 by gulldan

Bug: Qwen2 BOS token? [bug-unconfirmed, low severity]
#7898 opened Jun 12, 2024 by Ph0rk0z

Bug: convert-hf-to-gguf.py fails for Gemma models [bug-unconfirmed, low severity]
#7897 opened Jun 12, 2024 by maab19

ci : self-hosted runner issue [testing]
#7893 opened Jun 12, 2024 by ggerganov

Support for MatMul-free LLMs [enhancement]
#7889 opened Jun 11, 2024 by sdmorrey

Bug: get-wikitext-103.sh seems not to be working [bug-unconfirmed, low severity]
#7878 opened Jun 11, 2024 by Eddie-Wang1120

Bug: GGML_ASSERT: ggml.c:12793: ne2 == ne02 zsh: abort ./finetune --model-base --train-data ./Llama3-8B-Chinese-Chat-fintune/111.tx [bug-unconfirmed, low severity]
#7877 opened Jun 11, 2024 by CodeBobobo

Bug: multithreading for requests, model infer service failed [bug-unconfirmed, low severity]
#7876 opened Jun 11, 2024 by liuzhipengchd

Feature Request: Add PaliGemma support [enhancement]
#7875 opened Jun 11, 2024 by nischalj10

Bug: Random output after the last update [bug-unconfirmed, high severity]
#7874 opened Jun 11, 2024 by alexcardo

Bug: scripts/run-with-preset.py fails on the --tensor-split option when run on a non-GPU-enabled system [bug-unconfirmed, low severity]
#7864 opened Jun 10, 2024 by HanClinto

Bug: Server ends up in an infinite loop if the number of requests in the batch is greater than the number of parallel slots with a system prompt [bug-unconfirmed, high severity]
#7834 opened Jun 8, 2024 by kdhingra307

Bug: iGPU offloading: memory access fault by GPU node-1 (appeared once only) [AMD GPU, bug-unconfirmed, low severity]
#7829 opened Jun 8, 2024 by eliranwong

Bug: CUDA-enabled Docker container fails to launch [bug-unconfirmed, critical severity]
#7822 opened Jun 7, 2024 by mblunt

Bug: Running a large model through the server using the Vulkan backend always generates gibberish after the first call [bug-unconfirmed, medium severity]
#7819 opened Jun 7, 2024 by richardanaya

I am running two socket servers, and the CPU usage is at 50% [bug-unconfirmed, high severity]
#7812 opened Jun 7, 2024 by superLiben