🐛 Bug Report: HuggingFace Mistral models not working #800

Victorivus opened this issue Dec 12, 2023 · 1 comment

📜 Description

Using a HuggingFace model from the Mistral family crashes the app with `KeyError: 'mistral'`.

👟 Reproduction steps

With the parameters from the documentation:
LLM_NAME=huggingface
EMBEDDINGS_NAME=sentence-transformers/all-mpnet-base-v2

set `llm_name` to 'mistralai/Mistral-7B-Instruct-v0.1' instead of 'Arc53/DocsGPT-7B' in the DocsGPT/application/llm/huggingface.py file, then launch the app and ask a question.

In DocsGPT/application/api/answer/routes.py, the `completion = llm.gen_stream(` call must also be replaced by `completion = llm.gen(` to avoid a NotImplementedError and get a response.
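Rather than permanently replacing the `gen_stream` call, a fallback at the call site would also avoid the NotImplementedError. A minimal sketch of that pattern, using a hypothetical stand-in class (not DocsGPT's actual `HuggingFaceLLM` implementation):

```python
class HuggingFaceLLMStub:
    """Hypothetical stand-in mimicking the reported behavior:
    gen() works, gen_stream() is not implemented."""

    def gen(self, messages):
        return "answer"

    def gen_stream(self, messages):
        raise NotImplementedError

def complete(llm, messages):
    # Prefer streaming, but fall back to the blocking call when the
    # backend does not implement gen_stream().
    try:
        return llm.gen_stream(messages)
    except NotImplementedError:
        return llm.gen(messages)

print(complete(HuggingFaceLLMStub(), [{"role": "user", "content": "hi"}]))  # -> answer
```

This keeps streaming for backends that support it while degrading gracefully for the HuggingFace one.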

👍 Expected behavior

It should load the model and answer the question.

👎 Actual Behavior with Screenshots

It crashes with KeyError: 'mistral'
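The log output below shows where the crash comes from: AutoConfig resolves the `model_type` string read from the model's config.json via a registry dict, and an older transformers has no "mistral" entry. A toy illustration of the mechanism (the real `CONFIG_MAPPING` lives inside transformers):

```python
# Toy stand-in for transformers' CONFIG_MAPPING registry, not the real one.
TOY_CONFIG_MAPPING = {"llama": "LlamaConfig", "gpt2": "GPT2Config"}

model_type = "mistral"  # the value read from the model's config.json
try:
    config_class = TOY_CONFIG_MAPPING[model_type]
except KeyError as err:
    print(f"KeyError: {err}")  # -> KeyError: 'mistral'
```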


💻 Operating system

Linux

What browsers are you seeing the problem on?

No response

🤖 What development environment are you experiencing this bug on?

Docker

🔒 Did you set the correct environment variables in the right path? List the environment variable names (not values please!)

LLM_NAME
EMBEDDINGS_NAME
FLASK_APP
FLASK_DEBUG
CELERY_BROKER_URL
CELERY_RESULT_BACKEND

📃 Provide any additional context for the Bug.

The bug is fixed by updating the transformers library: `pip install -U transformers`
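To the best of my knowledge, Mistral support landed in transformers 4.34.0 (treat that threshold as an assumption), so a quick stdlib-only check can tell whether an installed version is affected:

```python
def _parse(v: str) -> tuple:
    # Naive parse of a release tag like "4.35.0"; ignores dev/rc suffixes.
    return tuple(int(p) for p in v.split(".")[:3])

# Assumption: 4.34.0 is the first transformers release that registers
# the "mistral" model_type.
MISTRAL_MIN = _parse("4.34.0")

def supports_mistral(installed_version: str) -> bool:
    """Return True if this transformers version should know 'mistral'."""
    return _parse(installed_version) >= MISTRAL_MIN

print(supports_mistral("4.30.2"))  # -> False (would crash with KeyError)
print(supports_mistral("4.35.0"))  # -> True
```

In practice, `pip show transformers` gives the installed version to feed into a check like this.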

📖 Relevant log output

Downloading tokenizer_config.json: 100%|█████████| 1.47k/1.47k [00:00<00:00, 4.23MB/s]
Downloading tokenizer.model: 100%|█████████████████| 493k/493k [00:00<00:00, 11.9MB/s]
Downloading tokenizer.json: 100%|████████████████| 1.80M/1.80M [00:00<00:00, 10.1MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 72.0/72.0 [00:00<00:00, 757kB/s]
Downloading config.json: 100%|███████████████████████| 571/571 [00:00<00:00, 7.32MB/s]
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/VSCode/DocsGPT/venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 256, in __next__
    return self._next()
  File "/VSCode/DocsGPT/venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded
    for item in iterable:
  File "/VSCode/DocsGPT/application/api/answer/routes.py", line 99, in complete_stream
    llm = LLMCreator.create_llm(settings.LLM_NAME, api_key=api_key)
  File "/VSCode/DocsGPT/application/llm/llm_creator.py", line 24, in create_llm
    return llm_class(*args, **kwargs)
  File "/VSCode/DocsGPT/application/llm/huggingface.py", line 23, in __init__
    model = AutoModelForCausalLM.from_pretrained(llm_name)
  File "/VSCode/DocsGPT/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 456, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/VSCode/DocsGPT/venv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 957, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/VSCode/DocsGPT/venv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 671, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
127.0.0.1 - - [12/Dec/2023 15:08:27] "POST /stream HTTP/1.1" 200 -
^C
worker: Hitting Ctrl+C again will terminate all running tasks!

worker: Warm shutdown (MainProcess)

👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find a similar issue

🔗 Are you willing to submit PR?

Yes, I am willing to submit a PR!

🧑‍⚖️ Code of Conduct

  • I agree to follow this project's Code of Conduct

Victorivus commented Dec 12, 2023

This bug gets fixed with `pip install -U transformers`.

The modified libs are here: #801
