@remiconnesson and @ajarcik - thanks for flagging this issue so we can fix it. For the example above, please remove the 'from_hf' flag and everything should work, e.g., prompter = Prompt().load_model(model_name) with model_name = "llmware/bling-1b-0.1". (We will update the example code too.)
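For reference, a minimal sketch of the corrected usage (the model name is taken from the example above; the sample question is illustrative only, and prompt_main is used here as the standard Prompt inference entry point):

```python
from llmware.prompts import Prompt

# load the model by catalog name - no 'from_hf' flag needed
model_name = "llmware/bling-1b-0.1"
prompter = Prompt().load_model(model_name)

# run a simple inference to confirm the model loaded correctly
# (illustrative question - replace with your own prompt/context)
response = prompter.prompt_main("What is the capital of France?")
print(response)
```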
When 'from_hf=True' is set, we pass the model name directly to the HF/transformers Auto classes, and then wrap the HFGenerative class around the instantiated HF object. In the course of doing that, we missed a safety check on a null config setting, which we are fixing in parallel; the fix should be merged into the code later today.
As an alternative to the 'from_hf' approach, you can register any custom model (PyTorch or GGUF) in the llmware ModelCatalog with a one-line registration, and then load it directly with .load_model, without the from_hf flag - see the sketch below.
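A sketch of that registration flow for a GGUF model - the repo, file name, and prompt wrapper below are illustrative placeholders, and you should double-check the exact register_gguf_model signature against the current ModelCatalog source:

```python
from llmware.models import ModelCatalog
from llmware.prompts import Prompt

# one-line registration of a custom GGUF model in the ModelCatalog
# (repo / file / wrapper values are placeholders - substitute your own)
ModelCatalog().register_gguf_model(model_name="my-custom-model",
                                   gguf_model_repo="TheBloke/zephyr-7B-beta-GGUF",
                                   gguf_model_file_name="zephyr-7b-beta.Q4_K_M.gguf",
                                   prompt_wrapper="hf_chat")

# once registered, the model loads like any other catalog model
prompter = Prompt().load_model("my-custom-model")
```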
Will keep the issue open until you confirm that all is good.
Closing due to inactivity - this appears to have been resolved by the fix described in the April 20 message above, and the example code was updated as well. If any problems persist, please open a new issue.
The snippet from the video https://www.youtube.com/watch?v=JjgqOZ2v5oU yields: