Model folder missing #166

Closed

inra0 opened this issue May 7, 2024 · 6 comments
Labels: bug report under review (repository owner use only)

inra0 commented May 7, 2024

[Three screenshots attached: the first shows the warning dialog; the last shows the contents of the Models folder]

When I query with "chunks only" checked, it works, but when I uncheck it, a warning pops up (first image). The Solar Instruct folder is missing, yet the Models tab says it's already downloaded.

After the error warning, I can't click the "Submit Question" button.

BBC-Esq (Owner) commented May 7, 2024

I don't see the Solar model in the folder containing the other models (the last image). That is very strange. Also, I'm not sure why the 12b model from stabilityai is showing; that was removed from my release (but left commented out, I believe). Can you try closing the program, restarting, and sending me a screenshot of what the Models tab looks like? It should not show Solar as downloaded unless there's a specific folder for it in the "Models" folder...

It should say "no" and allow you to download it.
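
(That "downloaded" status is essentially just a folder lookup; here's a minimal sketch of the idea, with a placeholder folder name rather than the app's actual layout:)

```python
import os

# Illustrative sketch of the "is this model downloaded?" check described
# above: the Models tab should only report "yes" when a matching subfolder
# exists. "solar-10.7b-instruct" is a placeholder, not the app's real
# folder name.
def is_downloaded(model_name: str, models_dir: str = "Models") -> bool:
    return os.path.isdir(os.path.join(models_dir, model_name))

print(is_downloaded("solar-10.7b-instruct"))  # prints False if the folder is missing
```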

Also, can you try some of the other chat models to see if they're working?

Lastly, I noticed that you're getting a pynvml error...what graphics card are you using?
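
(Side note: a quick standalone snippet like the one below will show what pynvml actually detects on your machine; it uses the pynvml package directly and isn't code from this repo.)

```python
import pynvml

# List the GPUs that NVML can see, with their VRAM usage. On a dual-GPU
# laptop, only the NVIDIA card should appear here, since NVML is
# NVIDIA-only and cannot see the AMD integrated GPU.
pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{name}: {mem.used / 1024**3:.1f} / {mem.total / 1024**3:.1f} GiB used")
finally:
    pynvml.nvmlShutdown()
```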

inra0 (Author) commented May 7, 2024

[Four screenshots attached]

I'm using a laptop with dual graphics: AMD integrated plus an RTX 3050 (recently updated, latest CUDA installed).
I restarted and downloaded Dolphin Llama 3. It works with "chunks only" checked, but it seems I run out of GPU RAM when "chunks only" is unchecked.

BBC-Esq (Owner) commented May 7, 2024

Yep, unchecking "chunks only" wouldn't change the VRAM usage; the model would still be loaded. But it's SUPPOSED to automatically remove the "local" model when you choose the "use LM Studio" radio button... With that being said, check the release page: it shows that Dolphin uses 9.2 GB. Also, my program doesn't yet have the ability to use multiple GPUs.

I'm seriously considering switching to llama-cpp, which lets you offload part of the model to the GPU and keep the rest in system RAM... but I wanted to get this release out ASAP. Let me know if reloading and restarting allows you to download the Solar model, please.
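
(For context, partial offloading with the llama-cpp-python bindings looks roughly like the sketch below; the model path and layer count are made-up examples, not this project's code.)

```python
from llama_cpp import Llama

# Sketch of the GPU/CPU split via llama-cpp-python: n_gpu_layers controls
# how many transformer layers go to VRAM; the remainder stays in system RAM.
# The GGUF path below is a hypothetical example, not a file this project ships.
llm = Llama(
    model_path="Models/solar-10.7b-instruct.Q4_K_M.gguf",
    n_gpu_layers=20,  # partial offload; -1 would offload every layer
    n_ctx=4096,
)

out = llm("Q: What does partial GPU offloading do? A:", max_tokens=48)
print(out["choices"][0]["text"])
```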

inra0 (Author) commented May 8, 2024 via email

BBC-Esq added the "bug report under review" label (repository owner use only) on May 15, 2024

BBC-Esq (Owner) commented May 17, 2024

Does your VRAM usage roughly match what my release page says it should be for the various models? Are you still unable to download the SOLAR model? Any more details?

BBC-Esq (Owner) commented May 20, 2024

Feel free to reopen if this issue persists.

BBC-Esq closed this as completed May 20, 2024