Add support for tabAutocompleteModel remotely #1215
Comments
@xndpxs this is a consequence of how Ollama manages models: they won't always recognize an alias for the model you have downloaded. If you set …
Nope, didn't work. I've tried all possible combinations (latest, 2b, -, :) and none of them worked. I also tried both specifying the apiBase and leaving it out. None of that worked. I tried with this:

    "tabAutocompleteModel": {
      "title": "Tab Autocomplete Model",
      "provider": "ollama",
      "model": "llama3",
      "apiBase": "http://ip:port"
    },

It doesn't give me errors, but it looks like it is using llama3 instead of starcoder2.
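For comparison, the entry that this thread converges on points the model field at the completion model rather than the chat model. A minimal sketch, assuming starcoder2:latest has actually been pulled on the remote host and that http://ip:port is the same apiBase used for the chat model:

    "tabAutocompleteModel": {
      "title": "Tab Autocomplete Model",
      "provider": "ollama",
      "model": "starcoder2:latest",
      "apiBase": "http://ip:port"
    }

The model value generally has to match a name that `OLLAMA_HOST=ip:port ollama list` reports on that server, which is what the later comments here work out.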
Not sure if it's related, but I had pretty much the same issue yesterday when I installed the latest version (0.8.25) of the VSCode extension. While chat worked just fine with my "remote" Ollama install on the local network, autocomplete was logging errors in the VSCode debug console along the lines of: …
I don't know if this could be relevant to anyone, but in my case it works correctly with this setup: my IDE is PHPStorm and I use Ollama installed on another computer on my local network. When I execute: …
It was fixed!

    OLLAMA_HOST=ip:port ollama list
    NAME            ID              SIZE    MODIFIED
    llama3:latest   a6990ed6be41    4.7 GB  5 hours ago

It only showed 1 of the 2 models I have installed, so I ran

    OLLAMA_HOST=ip:port ollama pull starcoder2:latest

and then it worked fine. Honestly, I don't remember if I installed starcoder2 before or after I modified the systemd config file:

    [Unit]
    Description=Ollama Service
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/ollama serve
    Environment=OLLAMA_HOST=ip:port
    User=ollama
    Group=ollama
    Restart=always
    RestartSec=3

    [Install]
    WantedBy=default.target
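If you edit the unit file as above, the change only takes effect once systemd reloads it and the service restarts, and as the `ollama list` output above shows, the LAN-exposed instance may not have every model you expect, so it is worth checking and pulling against that same endpoint. A rough sequence, assuming the service is installed under the usual `ollama` unit name and `ip:port` is the address from the unit file:

    # Reload the unit file and restart Ollama so the new OLLAMA_HOST takes effect
    sudo systemctl daemon-reload
    sudo systemctl restart ollama

    # Check which models the LAN-exposed instance actually serves
    OLLAMA_HOST=ip:port ollama list

    # Pull the autocomplete model against that same endpoint if it is missing
    OLLAMA_HOST=ip:port ollama pull starcoder2:latest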
Relevant environment info
Description
Hi all, my llama3 model is working flawlessly over the LAN.
The problem is that I need starcoder2 for the tabAutocompleteModel option.
I can see both models with ollama list:
Continue works with llama3 remotely, but I can't seem to configure tabAutocompleteModel in Continue with starcoder2 from the same server. I am getting this error:

    The model 'starcoder2' was not found

But in fact, as you can see, it is running.
To reproduce
I am configuring
Log output