System Info
[pip3] flake8==7.0.0
[pip3] flake8-bugbear==24.2.6
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1+cu118
[pip3] torchdata==0.7.1+cpu
[pip3] torchtext==0.17.1
[pip3] triton==2.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.2.1+cu118 pypi_0 pypi
[conda] torchdata 0.7.1+cpu pypi_0 pypi
[conda] torchtext 0.17.1 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
Information
🐛 Describe the bug
I followed the instructions to install llama-recipes and run inference with Llama-2-chat-70b. However, it failed to load the model with the following error:

File "examples/example_chat_completion.py", line 64, in main
    model = load_model(model_name, quantization, use_fast_kernels)
TypeError: load_model() takes 2 positional arguments but 3 were given
I checked the history of llama_recipes.inference.model_utils.load_model and found that its signature changed last month. Installing llama-recipes from source fixed the issue. Is the version published on pip outdated?
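Until the pip release catches up, a call site can tolerate both signatures by inspecting the installed function before calling it. This shim and its parameter names are an assumption based on the traceback above, not part of llama-recipes:

```python
import inspect


def call_load_model(load_model, model_name, quantization, use_fast_kernels):
    """Call load_model with or without use_fast_kernels, depending on
    which signature the installed llama-recipes version exposes.

    Hypothetical compatibility shim; argument names are taken from the
    traceback in this report, not from the library's documentation.
    """
    params = inspect.signature(load_model).parameters
    if "use_fast_kernels" in params:
        # Newer (source) versions accept the extra argument.
        return load_model(model_name, quantization, use_fast_kernels)
    # Older pip-released versions take only two positional arguments.
    return load_model(model_name, quantization)
```

This avoids the TypeError on the older release while still forwarding the flag when the newer signature is available.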
Error logs

File "examples/example_chat_completion.py", line 64, in main
    model = load_model(model_name, quantization, use_fast_kernels)
TypeError: load_model() takes 2 positional arguments but 3 were given
Expected behavior
Update the llama-recipes package on pip so it matches the current examples.
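To confirm whether the pip release lags the source checkout, the installed version can be read with the standard library. A minimal sketch (the package name is assumed to be "llama-recipes"; adjust if your distribution name differs):

```python
from importlib.metadata import version, PackageNotFoundError

# Report the version pip installed, if any; this reads the installed
# package metadata, so an editable/source install is reported too.
try:
    print("llama-recipes version:", version("llama-recipes"))
except PackageNotFoundError:
    print("llama-recipes is not installed")
```

Comparing this against the latest tag in the repository shows whether the published release predates the load_model signature change.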