Describe your use case.
I plan to use the llama-cpp project to host a private instance trained on our own data. llama-cpp provides a way to run AI models directly on the CPU using various models.
Using https://github.com/getumbrel/llama-gpt, it's possible to host our own AI server to generate specific content.
Since most implementations provide an OpenAI-compatible API, it would be nice to allow the user to define a custom base URL when creating an OpenAI connection, or to mimic the OpenAI piece to support a self-hosted llama-cpp server.
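For illustration, here is a minimal sketch of what such a connection could look like with the official `openai` Node client, which already accepts a `baseURL` override. The port, model name, and API key are assumptions about a local llama-gpt deployment, not values from this project:

```ts
// Sketch only: assumes a llama-cpp/llama-gpt server with an OpenAI-compatible
// API listening on localhost:3001. Adjust port and model to your deployment.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-no-key-required", // self-hosted servers typically ignore the key
  baseURL: "http://localhost:3001/v1", // point the client at the local server
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "llama-2-7b-chat", // hypothetical; whatever model the server serves
    messages: [{ role: "user", content: "Hello from a self-hosted server!" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

If the piece exposed `baseURL` as an optional connection field, the existing OpenAI actions could presumably work unchanged against any compatible server.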
Describe alternatives you've considered
It might be possible to make direct calls to the HTTP REST API (a rough sketch is below), though I have not checked whether that works properly.
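As a rough sketch of that alternative, assuming the same hypothetical local server and model name as above, a direct request to the OpenAI-compatible `/v1/chat/completions` endpoint might look like this:

```ts
// Sketch only: a plain HTTP call to a self-hosted OpenAI-compatible endpoint.
// URL and model name are assumptions about a local deployment.
const response = await fetch("http://localhost:3001/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama-2-7b-chat",
    messages: [{ role: "user", content: "ping" }],
  }),
});

const data = await response.json();
console.log(data.choices?.[0]?.message?.content);
```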
In any case, the fact that it's possible to run a self-hosted OpenAI-compatible server might be worth highlighting in the documentation or in the piece itself.