
Error in Prompt.load(from_hf): model_card (NoneType) is not iterable #613

Closed · remiconnesson opened this issue Apr 17, 2024 · 3 comments

remiconnesson commented Apr 17, 2024

The snippet from the video https://www.youtube.com/watch?v=JjgqOZ2v5oU

from llmware.prompts import Prompt
model_name = "llmware/bling-1b-0.1"

prompter = Prompt().load_model(model_name, from_hf=True)

yields:

config.json: 100%|██████████| 2.27k/2.27k [00:00<00:00, 7.64MB/s]
pytorch_model.bin: 100%|██████████| 4.11G/4.11G [00:31<00:00, 131MB/s] 
tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 4.03MB/s]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[39], line 3
      1 model_name = "llmware/bling-1b-0.1"
----> 3 prompter = Prompt().load_model(model_name, from_hf=True)

File ~/venv/lib/python3.10/site-packages/llmware/prompts.py:270, in Prompt.load_model(self, gen_model, api_key, from_hf, trust_remote_code, use_gpu, sample, get_logits, max_output, temperature)
    267     hf_tokenizer = AutoTokenizer.from_pretrained(gen_model, trust_remote_code=trust_remote_code)
    269 #   now, we have 'imported' our own custom 'instruct' model into llmware
--> 270 self.llm_model = self.model_catalog.load_hf_generative_model(custom_hf_model, hf_tokenizer,
    271                                                          instruction_following=False,
    272                                                          prompt_wrapper="human_bot")
    273 # prepare 'safe name' without file paths
    274 self.llm_model.model_name = re.sub("[/]","---",gen_model)

File ~/venv/lib/python3.10/site-packages/llmware/models.py:766, in ModelCatalog.load_hf_generative_model(self, model, tokenizer, prompt_wrapper, instruction_following)
    760 def load_hf_generative_model(self, model,tokenizer,prompt_wrapper=None,
    761                              instruction_following=False):
    763     """ Loads and integrates a Huggingface generative decoder-based 'causal' model with limited options
    764     to control model preprocessing prompt behavior """
--> 766     model = HFGenerativeModel(model, tokenizer, prompt_wrapper=prompt_wrapper,
    767                               instruction_following=instruction_following)
    769     return model

File ~/venv/lib/python3.10/site-packages/llmware/models.py:4152, in HFGenerativeModel.__init__(self, model, tokenizer, model_name, api_key, model_card, prompt_wrapper, instruction_following, context_window, use_gpu_if_available, trust_remote_code, sample, max_output, temperature, get_logits)
   4149 self.prompt_wrapper = prompt_wrapper
   4150 self.instruction_following = instruction_following
-> 4152 if "instruction_following" in model_card:
   4153     self.instruction_following = model_card["instruction_following"]
   4154 else:

TypeError: argument of type 'NoneType' is not iterable
ajarcik commented Apr 19, 2024

Same issue here. I get the same error when I pass in other Hugging Face models as well.

doberst (Contributor) commented Apr 20, 2024

@remiconnesson and @ajarcik - thanks for sharing this issue so we can fix it. For the example above, please remove the from_hf flag, and everything should work fine, e.g., prompter = Prompt().load_model(model_name) with model_name = "llmware/bling-1b-0.1". (We will update the example code too.)
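
For reference, the suggested fix applied to the original snippet looks like this (a minimal sketch of the workaround described above):

from llmware.prompts import Prompt

model_name = "llmware/bling-1b-0.1"

# Omit from_hf=True so llmware resolves the model through its own
# ModelCatalog instead of wrapping a raw Hugging Face object
prompter = Prompt().load_model(model_name)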

When from_hf=True is set, we pass the model name straight to the HF/transformers Auto classes, and then wrap the instantiated HF object in the HFGenerativeModel class. In the course of doing that, we missed a safety check on a null config setting (model_card is None). We are fixing that in parallel, and it should be merged into the code later today.
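
Judging from the traceback, the missing safety check amounts to guarding the model_card lookup in HFGenerativeModel.__init__ before testing membership on it. Roughly (an illustrative sketch, not the actual patch):

# models.py, HFGenerativeModel.__init__ (illustrative sketch) --
# model_card is None when the model is imported via from_hf=True,
# so guard the membership test before iterating over it
if model_card and "instruction_following" in model_card:
    self.instruction_following = model_card["instruction_following"]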

As an alternative to the from_hf approach, you can register any custom model (PyTorch or GGUF) in the llmware ModelCatalog with a one-line registration, and then pull it directly with .load_model without the from_hf flag - see the sketch below.
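
The registration step might look something like the following. The method name and parameters shown here (register_new_hf_generative_model and its argument) are assumptions based on the comment above, so please check the llmware ModelCatalog documentation for the current API:

from llmware.models import ModelCatalog
from llmware.prompts import Prompt

# Hypothetical one-line registration of a custom Hugging Face model --
# exact method name and signature may differ in your llmware version
ModelCatalog().register_new_hf_generative_model(hf_model_name="llmware/bling-1b-0.1")

# After registration, the model loads like any catalog model
prompter = Prompt().load_model("llmware/bling-1b-0.1")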

Will keep the issue open until you confirm that all is good.

doberst (Contributor) commented May 23, 2024

Closing due to inactivity; the issue appears to have been resolved by the prior message on April 20, and the example code was updated as well. If any problems persist, please open a new issue.

doberst closed this as completed on May 23, 2024