Add OpenELM #3910
Not supported in llama.cpp yet; there's an issue for it, ggerganov/llama.cpp#6868, labeled as a good first issue, if someone with C++ and Python experience wants to tackle it. 👍
Interesting, I hadn't seen this issue, and was trying to upload this model. 🫣
What is the requirement for llama.cpp?
Ollama makes heavy use of llama.cpp; it's the backend Ollama uses. When you start Ollama, it starts a llama.cpp server, and when you chat with an LLM through Ollama, it forwards the request to that llama.cpp server (see line 73 and lines 1315 to 1320 at commit 2bed629). You can see the llama.cpp submodule under https://github.com/ollama/ollama/tree/main/llm.
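To make the forwarding idea concrete, here is a minimal sketch assuming llama.cpp's example server is running locally on its default port 8080; the `/completion` endpoint and payload fields follow that server's HTTP API, while Ollama's real forwarding code is written in Go and is more involved:

```python
# Minimal sketch: send a completion request to a locally running
# llama.cpp server over HTTP, the way Ollama (conceptually) forwards
# a chat turn to its backend. Host, port, and field names follow
# llama.cpp's example server and are assumptions here.
import json
import urllib.request

def complete(prompt: str, n_predict: int = 128) -> str:
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(
        "http://127.0.0.1:8080/completion",  # assumed default host/port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The server's JSON response carries the generated text in "content".
        return json.load(resp)["content"]

if __name__ == "__main__":
    print(complete("Why is the sky blue?"))
```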
Update on OpenELM support: a draft PR has been opened, ggerganov/llama.cpp#6986, and @joshcarp is looking for anyone to help out. I'm certain it would be appreciated if anyone with experience in C++, Python, or something related can pitch in. 👍
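For anyone weighing whether to pick this up: adding a new architecture to llama.cpp generally means a converter entry in convert-hf-to-gguf.py that maps the Hugging Face checkpoint to GGUF, tensor-name mappings in the gguf-py package, and a C++ compute graph for the architecture. Below is a rough sketch of just the converter half, following the registration pattern other models used in that file at the time; the `OPENELM` enum value, the `OpenELMModel` class name, and the hparam keys are illustrative assumptions, and the snippet would live inside convert-hf-to-gguf.py, where the `Model` base class and its `register` decorator are defined:

```python
# Rough sketch of the convert-hf-to-gguf.py side, following the pattern
# existing models used. This belongs inside convert-hf-to-gguf.py, which
# defines the Model base class; it is not runnable standalone.
import gguf  # llama.cpp's gguf-py package

@Model.register("OpenELMForCausalLM")  # HF config "architectures" string
class OpenELMModel(Model):
    # gguf.MODEL_ARCH.OPENELM does not exist until the PR adds it.
    model_arch = gguf.MODEL_ARCH.OPENELM

    def set_gguf_parameters(self):
        # Copy the checkpoint's hyperparameters into GGUF metadata.
        # Key names here are guesses based on OpenELM's HF config.json.
        self.gguf_writer.add_context_length(self.hparams["max_context_length"])
        self.gguf_writer.add_embedding_length(self.hparams["model_dim"])
        self.gguf_writer.add_block_count(self.hparams["num_transformer_layers"])
```

The converter alone isn't enough: the matching C++ graph in llama.cpp is where OpenELM's per-layer head counts and FFN sizes make the work non-trivial.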
Apple released several open source LLMs that are designed to run on-device.
Hugging Face link