
[Enhancement]: When LLM_OPTION=llama_2 is set, the project downloads a large model file; although the terminal reports a download speed of almost 5 Mb/s, the download is frequently interrupted #81

Open
gobigrassland opened this issue Sep 5, 2023 · 2 comments
Labels: enhancement (New feature or request)

@gobigrassland

What would you like to be added?

Is there another way to download the LLM model in advance and place it in the appropriate directory?
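
For reference, a minimal sketch of pre-downloading the weights by hand (the repo id and target directory below are placeholders, not the project's documented values; the location the llama_2 option actually expects depends on its configuration):

```python
# Hypothetical pre-download script: fetch the model once so the app can read it
# from local disk instead of downloading at startup.
# repo_id and local_dir are placeholders -- check the project's llama_2
# configuration for the values it actually expects.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="meta-llama/Llama-2-7b-chat-hf",  # placeholder repo id
    local_dir="./models/llama_2",             # placeholder target directory
    resume_download=True,                     # pick up where an interrupted transfer stopped
)
print(f"Model files saved to: {local_dir}")
```

Re-running the script skips files that are already complete, which helps when the connection keeps dropping.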

Why is this needed?

No response

Anything else?

No response

gobigrassland added the enhancement (New feature or request) label on Sep 5, 2023
@jaelgu
Collaborator

jaelgu commented Sep 5, 2023

@junjiejiangjjj @wxywb Any idea?

@junjiejiangjjj
Collaborator

Can an additional parameter be added to specify the model path/location?
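
If such a parameter were added, the loading logic could look roughly like the sketch below. The environment variable and function names are hypothetical, only to illustrate the idea of preferring a user-supplied local path and downloading only as a fallback:

```python
# Sketch of the proposed behaviour: use a pre-downloaded model when a path is
# configured, otherwise fall back to the existing download step.
# LLM_MODEL_PATH and resolve_model_path are hypothetical names, not the
# project's actual configuration keys.
import os


def resolve_model_path(download_fn, default_cache="./models/llama_2"):
    """Return a local model path, downloading only if no path is configured."""
    local_path = os.environ.get("LLM_MODEL_PATH")  # hypothetical env var
    if local_path and os.path.exists(local_path):
        return local_path                          # use the pre-downloaded model
    return download_fn(default_cache)              # otherwise download as before
```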
