
Ollama support? #1

Open
Madd0g opened this issue Nov 27, 2023 · 1 comment

Comments

Madd0g commented Nov 27, 2023

Hey, very exciting project. I've been wanting to build something like this for a while now.

I've recently started playing with local models using Ollama (hosted on a small computer on my network).

Is it possible to connect this to Ollama? (Maybe what I'm really asking is whether it can be used without the router that requires Nvidia hardware.)

Also, beyond installation instructions and the high-level design, is there a wiki that explains how the individual parts work?

Thanks!

noco-ai (Owner) commented Dec 6, 2023

Hello, thanks for the kudos on the project. I just took a quick look at Ollama, and it could be integrated the same way the software integrates with other external chat APIs (like OpenAI), but it would have to be done at the Python worker level, and that would not solve the issue of the Nvidia requirement. I am currently brainstorming how to remove the Nvidia requirement (the real dependency is on ExLlama for LoRA hot swapping) but have yet to come up with a solution. I will be creating more verbose documentation and hosting it somewhere in the near future to give a detailed explanation of the individual parts of the stack.
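For anyone curious, here is a rough sketch of what calling Ollama's HTTP API from a Python worker could look like. The endpoint, model name, and the `requests` dependency are illustrative assumptions, not code that exists in this repo.

```python
import requests  # assumption: the worker environment has requests installed

# Default Ollama endpoint; point this at the machine on your LAN running Ollama.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ollama_generate(prompt: str, model: str = "llama2") -> str:
    """Send a single prompt to an Ollama server and return the full completion."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    # With stream=False, Ollama returns one JSON object with the text in "response".
    return resp.json()["response"]

if __name__ == "__main__":
    print(ollama_generate("Why is the sky blue?"))
```

A worker wrapping something like this would still need to plug into the rest of the stack's message routing, so it is only a starting point, not a drop-in integration.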
