
For OpenAI piece, allow defining custom URL to use alternate AI server #4600

Open
ikus060 opened this issue Apr 30, 2024 · 0 comments

Describe your use case.
I plan to use the llama-cpp project to host a private instance trained with our own data. llama-cpp provides a way to run various AI models directly on the CPU.

Using https://github.com/getumbrel/llama-gpt, it's possible to host our own AI server to generate specific content.

Since most implementations provide an OpenAI-compatible API, it would be nice to let the user define a custom URL when creating an OpenAI connection, or to mimic the OpenAI piece to support a self-hosted llama-cpp server.
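To illustrate what this request amounts to, here is a minimal sketch of building an OpenAI-style chat completion request against a configurable base URL. The function name, the `base_url` parameter, and the `localhost:3001` address are illustrative assumptions, not part of any existing piece; the endpoint path `chat/completions` is the one the OpenAI API and compatible servers expose.

```python
import json
from urllib.parse import urljoin
from urllib.request import Request

# The official API lives under this base; an OpenAI-compatible
# self-hosted server exposes the same paths at a different base URL.
DEFAULT_BASE_URL = "https://api.openai.com/v1/"

def build_chat_request(api_key, messages, model="gpt-3.5-turbo",
                       base_url=DEFAULT_BASE_URL):
    """Build a chat completion request; only base_url changes per server."""
    url = urljoin(base_url, "chat/completions")
    body = json.dumps({"model": model, "messages": messages}).encode()
    return Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Pointing the same code at a hypothetical self-hosted server:
req = build_chat_request(
    "not-needed",  # many self-hosted servers ignore the key's value
    [{"role": "user", "content": "Hello"}],
    base_url="http://localhost:3001/v1/",  # illustrative llama-gpt address
)
print(req.full_url)  # http://localhost:3001/v1/chat/completions
```

In other words, only the base URL differs between the hosted and self-hosted cases, which is why a single configurable field on the connection would be enough.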

Describe alternatives you've considered
It might be possible to make direct calls to the HTTP REST API. I did not check whether that works properly.

In any case, the fact that it's possible to run a self-hosted OpenAI-compatible server might be highlighted in the documentation or in the piece.
