autocompletion does not work in VSCode #37

Open
arturshevchenko opened this issue Apr 11, 2024 · 6 comments
Labels
bug Something isn't working

Comments

@arturshevchenko

Describe the bug

Autocompletion does not work at all.

How to reproduce

Install Ollama, run it, pull deepseek-coder, and run deepseek-coder.
Set deepseek-coder as the model in the extension settings.

[Screenshot: CleanShot 2024-04-11 at 11 04 32]

Write some code and wait for a completion.
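For reference, these steps correspond roughly to the following commands (a sketch; the default Ollama port and the plain `deepseek-coder` tag are assumptions based on the description above):

```sh
# Download the model weights from the Ollama library
ollama pull deepseek-coder

# Sanity-check that the model loads and responds interactively
ollama run deepseek-coder

# The extension talks to Ollama's HTTP API, which listens on port 11434 by default
curl http://localhost:11434/api/tags
```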

Expected behavior

Code completion works.

Actual behavior

Autocompletion does not work.

arturshevchenko added the bug label Apr 11, 2024
@srikanth235
Owner

Hey,

Could you please check the logs by using the 'Privy: Show Logs' option in the command palette? Check to see if the requests are being sent to the Ollama instance. You should see something similar to the attached screenshot.

[Screenshot: 2024-04-11 at 19 55 33]
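If nothing appears in the logs, you can also check that the Ollama server itself is reachable outside the extension (a sketch; assumes Ollama's default port 11434 and that deepseek-coder has already been pulled):

```sh
# Confirm the server is up and list the locally available models
curl http://localhost:11434/api/tags

# Send a minimal completion request directly, bypassing the extension entirely
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder",
  "prompt": "def add(a, b):",
  "stream": false
}'
```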

@arturshevchenko
Author

The response is shown in the logs:
[Screenshot: CleanShot 2024-04-11 at 17 34 54@2x]

@srikanth235
Owner

For auto-completion, could you please try one of the base models, as shown in the image?

[Screenshot: 2024-04-12 at 10 18 05]
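For example (a sketch; the tag names are taken from the Ollama model library and may change over time):

```sh
# Base variants are completion models rather than chat/instruct models
ollama pull deepseek-coder:1.3b-base   # smallest, runs on modest hardware
ollama pull deepseek-coder:6.7b-base   # larger, needs more RAM/VRAM
```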

@arturshevchenko
Author

the same :(

@srikanth235
Owner

I wasn't able to reproduce this issue. A few things to check are:

  1. Are there any other installed extensions that also provide autocompletion? If yes, please try disabling them (see the commands after this list).
  2. Please try updating your VSCode to the latest version.
  3. Try re-installing the extension and restarting the VSCode editor.
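For the first check, the VSCode CLI can help rule out extension conflicts quickly (a sketch; assumes the `code` command is on your PATH):

```sh
# List installed extensions to spot other completion providers
code --list-extensions

# Launch a window with all extensions disabled to establish a baseline
code --disable-extensions
```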

@rainabba

@srikanth235 Thanks for the tip about using 1.3b-base. I too was trying to use a non-base model.

@arturshevchenko, from his tip I realized the naming is significant here. Given that I was able to use the 6.7b-base model, I think the "-base" suffix indicates a completion-trained model rather than an instruction-tuned chat model. You got a response, but it was verbose and didn't have the <> tags I'm seeing in my now-working query/response. If your issue is the same, switch to one of the three "-base" models, depending on your hardware capabilities. For some reason, Ollama isn't using my GPU right now, though, so responses take so long it looks like nothing is happening until I watch the Privy log output.
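For context on those tags: the base variants of DeepSeek Coder are trained with fill-in-the-middle (FIM) tokens, which is what autocomplete-style prompting relies on. A direct request to Ollama in that format might look roughly like this (a sketch; assumes the default port and DeepSeek Coder's published FIM token format):

```sh
# Raw infill request using DeepSeek Coder's FIM tokens;
# "raw": true tells Ollama not to wrap the prompt in a chat template
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:1.3b-base",
  "prompt": "<｜fim▁begin｜>def fib(n):\n    <｜fim▁hole｜><｜fim▁end｜>",
  "raw": true,
  "stream": false
}'
```

An instruct-tuned model given the same prompt tends to reply with verbose prose instead of a bare code completion, which matches the behavior described above.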

@srikanth235 Thanks for the awesome extension!
