Issues: ollama/ollama

Please add GLM4 models [model request]
#5002 opened Jun 12, 2024 by DK013
Please add support for rk3588 NPU [feature request]
#5001 opened Jun 12, 2024 by pine64noob
Regression on ollama docker images >=0.1.33-rocm: rocBLAS does not find secondary GPU for Tensile [amd] [bug]
#5000 opened Jun 12, 2024 by icodeforyou-dot-net
Error "transferring model data" when creating a model [bug]
#4998 opened Jun 12, 2024 by tigerkin89
Ollama GPU not loading properly [bug]
#4995 opened Jun 12, 2024 by tankvpython
Support for recurrent gemma [model request]
#4994 opened Jun 12, 2024 by userforsource
AI models stop working after a few user-only messages [bug]
#4993 opened Jun 12, 2024 by TheUntitledGoose
Error pulling llama2 manifest [bug]
#4992 opened Jun 12, 2024 by adityapandit1798
First value different on CUDA/ROCm when setting seed [amd] [bug] [nvidia]
#4990 opened Jun 12, 2024 by jmorganca
Failed to acquire semaphore" error="context canceled" [bug]
#4989 opened Jun 12, 2024 by travisgu
[Model Request] Add dolphin-qwen2 [model request]
#4988 opened Jun 12, 2024 by mak448a
Ollama not using GPU after OS reboot [bug]
#4984 opened Jun 11, 2024 by lukasmwerner
SIGSEGV during ollama serve cgo execution (CUDA) [amd] [bug]
#4982 opened Jun 11, 2024 by 7910f6ba7ee4
0xc0000409 error with llava-phi3 [bug]
#4979 opened Jun 11, 2024 by razvanab
Error: pull model manifest: Get [bug]
#4976 opened Jun 11, 2024 by funnyPhani
OLLAMA_MODEL_DIR is not taking effect on macOS [bug]
#4973 opened Jun 11, 2024 by yusufaly
How to disallow the use of both GPU and CPU [feature request]
#4971 opened Jun 11, 2024 by xiaohanglei
Error pulling Qwen2 models: unknown pre-tokenizer type: 'qwen2' [bug]
#4969 opened Jun 11, 2024 by agilebean
API Silently Truncates Conversation [bug]
#4967 opened Jun 10, 2024 by flu0r1ne