
Is it possible to load LLM2Vec config? #64

Closed
xiaoyuqian2 opened this issue May 9, 2024 · 2 comments


xiaoyuqian2 commented May 9, 2024

Is it possible to load LLM2Vec encoder model config without loading the weights, like below?

```python
config = AutoConfig.from_pretrained("McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp")
model = AutoModel.from_config(config)
```

This may be an uncommon request, but what I want to achieve is:

  • load LLM2Vec model config
  • download all weights needed (including the base model weights and the adapter weights) explicitly to a folder llm2vec_weights
  • initialize the model later by running AutoModel.from_pretrained(llm2vec_weights, trust_remote_code=True).

Is this possible? If so, which weights should I download (e.g., for an LLM2Vec encoder model with llama2-7b-chat as the base)? Thank you!
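For step 2, a minimal sketch of explicitly snapshotting both components into a local folder. The repo ids are illustrative assumptions: the adapter id follows the McGill-NLP naming pattern from the question above, and the base id is the usual Hugging Face hub id for llama2-7b-chat; `snapshot_download` is the standard `huggingface_hub` helper for downloading a whole repo to disk.

```python
from pathlib import Path

# Repo ids are assumptions for illustration, not confirmed by the maintainers:
# the adapter id mirrors the McGill-NLP naming pattern; the base id is the
# common hub id for llama2-7b-chat.
BASE_REPO = "meta-llama/Llama-2-7b-chat-hf"
ADAPTER_REPO = "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp"

def weight_layout(target_dir: str = "llm2vec_weights") -> dict:
    """Map each component to the local subfolder its snapshot should land in."""
    root = Path(target_dir)
    return {"base": root / "base", "adapter": root / "adapter"}

def download_llm2vec_weights(target_dir: str = "llm2vec_weights") -> dict:
    """Explicitly fetch base-model and adapter weights into target_dir."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    repos = {"base": BASE_REPO, "adapter": ADAPTER_REPO}
    return {name: snapshot_download(repo_id=repos[name], local_dir=str(path))
            for name, path in weight_layout(target_dir).items()}
```

`weight_layout` is split out so the intended on-disk layout can be inspected without any network access; `download_llm2vec_weights` performs the actual (large) downloads.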

vaibhavad (Collaborator)

Hi @xiaoyuqian2,

Apologies for the late response. Do you want this setup in order to run the models offline? In that case, the steps detailed here will be an easier and simpler approach.
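For the offline scenario, a minimal sketch of loading strictly from local files. The folder name and the `dry_run` flag are illustrative assumptions; `HF_HUB_OFFLINE` and `local_files_only` are standard `huggingface_hub`/`transformers` options for preventing hub network calls.

```python
import os

def offline_load(local_dir: str = "llm2vec_weights", dry_run: bool = False):
    """Load a model strictly from local files, with no hub network calls.

    With dry_run=True, return only the kwargs so the sketch can be inspected
    without transformers installed or any weights on disk.
    """
    os.environ["HF_HUB_OFFLINE"] = "1"  # tell huggingface_hub to stay offline
    kwargs = {"trust_remote_code": True, "local_files_only": True}
    if dry_run:
        return kwargs
    from transformers import AutoModel  # deferred import: needs transformers
    return AutoModel.from_pretrained(local_dir, **kwargs)
```

With `local_files_only=True`, `from_pretrained` raises instead of silently reaching out to the hub if a file is missing, which makes incomplete snapshots fail fast.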

If you have a different use case, let me know and I will try to come up with steps to load the model under the constraints you mentioned.

vaibhavad (Collaborator)

Closing as it is stale. Feel free to re-open if you have any further questions.
