Issues: ollama/ollama
Error 403 when trying to call api/chat or api/generate from REST client
bug (Something isn't working)
#4115 opened May 3, 2024 by MaheshAwasare
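For reference, a minimal Python client for the endpoint this issue names, written as a sketch: it assumes a default local install listening on http://localhost:11434 and a model such as llama3 that has already been pulled (both are assumptions, not details from the report). A 403 from /api/chat or /api/generate is commonly related to the server's Origin check, which is configurable with OLLAMA_ORIGINS.

    import json
    import urllib.request

    # Minimal sketch of a REST call to Ollama's /api/generate endpoint.
    # Assumes a default local install and that the "llama3" model is already
    # pulled; both are assumptions for illustration.
    payload = {
        "model": "llama3",
        "prompt": "Why is the sky blue?",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

If the same request succeeds from the command line but returns 403 from a browser-based REST client, checking OLLAMA_ORIGINS on the server is a reasonable first step.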
ollama create tries to pull model when using quotes in FROM line
bug (Something isn't working)
#4114 opened May 3, 2024 by savareyhano
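For context, a sketch of the workflow this issue describes: a Modelfile whose FROM line points at a local GGUF file, followed by an ollama create run. The path and model name here are hypothetical, and the quoted variant noted in the comment is the behavior the report says goes wrong.

    import subprocess

    # Hypothetical Modelfile pointing at a local GGUF file; the path and model
    # name are placeholders, not taken from the report.
    with open("Modelfile", "w") as f:
        f.write("FROM ./models/my-model.gguf\n")
        f.write("PARAMETER temperature 0.7\n")

    # Per the report, quoting the path (FROM "./models/my-model.gguf") makes
    # ollama create treat it as a registry model name and try to pull it,
    # instead of importing the local file.
    subprocess.run(["ollama", "create", "my-model", "-f", "Modelfile"], check=True)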
Confusing error on linux with noexec on /tmp - Error: llama runner process no longer running: 1
bug (Something isn't working)
#4105 opened May 2, 2024 by utility-aagrawal
Ollama running in docker with concurrent requests doesn't work
bug (Something isn't working), docker (Issues relating to using ollama in containers)
#4102 opened May 2, 2024 by BBjie
Support NVIDIA's Llama fine-tune (ChatQA-1.5)
model request (Model requests)
#4101 opened May 2, 2024 by DuckyBlender
Error: do encode request: Post "http://127.0.0.1:39207/tokenize": EOF
bug (Something isn't working)
#4100 opened May 2, 2024 by j2l
Please support gfx1103 in rocm docker image
feature request (New feature or request)
#4099 opened May 2, 2024 by LaurentBonnaud
Ollama model stuck when executing commands.
bug (Something isn't working), docker (Issues relating to using ollama in containers)
#4098 opened May 2, 2024 by rk-spirinova
Generation Request Failing When Ollama Server Running Inside a Docker Container
bug (Something isn't working), docker (Issues relating to using ollama in containers)
#4097 opened May 2, 2024 by Deepansharora27
Is there a problem with the document?
bug (Something isn't working), windows
#4094 opened May 2, 2024 by ggjk616
crash loading llama-3-chinese-8b-instruct model
bug (Something isn't working), model request (Model requests)
#4080 opened May 1, 2024 by jiangweiatgithub
About OLLAMA_PARALLEL splitting the max context length
bug (Something isn't working)
#4079 opened May 1, 2024 by DirtyKnightForVi
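A short worked example of what the title appears to describe, assuming parallel request handling divides the configured context evenly across slots (this is how llama.cpp's server handles parallel sequences): with a context length of 2048 tokens and 4 parallel slots, each request would effectively get 2048 / 4 = 512 tokens, which is the "split" behavior this issue asks about.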
invalid file magic while importing llama3 70b into ollama
bug (Something isn't working)
#4075 opened May 1, 2024 by David20080125
Grammar Guided response from model.
feature request (New feature or request)
#4074 opened May 1, 2024 by NeevJewalkar
Ollama should prevent sleep when working.
feature request (New feature or request), good first issue (Good for newcomers), windows
#4072 opened May 1, 2024 by owenzhao
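For background, a minimal sketch of the standard Win32 call an application can make to keep the machine awake while work is in flight. This is illustrative only and is not code from Ollama (which is written in Go); it just shows the mechanism the feature request is pointing at.

    import ctypes

    # Win32 execution-state flags (documented values).
    ES_CONTINUOUS = 0x80000000
    ES_SYSTEM_REQUIRED = 0x00000001

    def prevent_sleep():
        # Keep the system (but not the display) awake until cleared.
        ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED)

    def allow_sleep():
        # Restore normal power management.
        ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS)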
Support building llama.cpp with Intel oneMKL
feature request (New feature or request)
#4069 opened May 1, 2024 by MarkWard0110
[FEATURE] Add llamascript to community projects
feature request (New feature or request)
#4061 opened Apr 30, 2024 by WolfTheDeveloper