OSError: [WinError -529697949] Windows Error 0xe06d7363 #5

Open
94bb494nd41f opened this issue May 26, 2023 · 0 comments

I tried the following models:

# note: in a raw string a single backslash suffices; r"D:\\..." yields a literal doubled separator
MODEL_NAME = 'ggml-vicuna-7b-q4_0.bin'
MODEL_PATH = r"D:\ggml-vicuna-7b-q4_0.bin"
MODEL_NAME = 'GPT4All-13B-snoozy.ggmlv3.q4_1.bin'
MODEL_PATH = r"D:\GPT4All-13B-snoozy.ggmlv3.q4_1.bin"
MODEL_NAME = 'ggml-old-vic7b-q4_0.bin'
MODEL_PATH = r"C:\Users\elnuevo\Downloads\ggml-old-vic7b-q4_0.bin"

But only the GPT4All model seemed to work: it did not crash, yet it took so long to produce an answer that I aborted anyway. A minimal check that isolates whether the crash comes from the model files themselves rather than the Streamlit/llama_index stack is sketched below.
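
A minimal isolation sketch, assuming only llama-cpp-python is installed and the paths above exist; it tries to open each file directly, outside Streamlit and llama_index:

from llama_cpp import Llama

# Try each model file directly to see which ones the installed
# llama-cpp-python build can actually open.
paths = [
    r"D:\ggml-vicuna-7b-q4_0.bin",
    r"D:\GPT4All-13B-snoozy.ggmlv3.q4_1.bin",
    r"C:\Users\elnuevo\Downloads\ggml-old-vic7b-q4_0.bin",
]

for path in paths:
    try:
        llm = Llama(model_path=path, n_ctx=512)
        print(path, "-> loaded OK")
        del llm  # free the context before trying the next file
    except OSError as e:  # the traceback below shows the crash surfaces as OSError
        print(path, "-> failed:", e)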

(local_llama_newpythno) C:\Users\elnuevo\local_llama>python -m streamlit run local_llama.py

  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.178.35:8501

A Review Article Access Recirculation Among End Stage Renal Disease Patients Undergoing Maintenance Hemodialysis.pdf
llama.cpp: loading model from C:\\Users\\elnuevo\\Downloads\\ggml-old-vic7b-q4_0.bin
2023-05-26 16:05:27.237 Uncaught app exception
Traceback (most recent call last):
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 146, in <module>
    query_index(query_u=user_input)
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 92, in query_index
    response = query_engine.query(query_u)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\query\base.py", line 23, in query
    response = self._query(str_or_query_bundle)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\query_engine\retriever_query_engine.py", line 145, in _query
    response = self._response_synthesizer.synthesize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\query\response_synthesis.py", line 178, in synthesize
    response_str = self._response_builder.get_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\compact_and_refine.py", line 57, in get_response
    response = super().get_response(
               ^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\token_counter\token_counter.py", line 78, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\refine.py", line 52, in get_response
    response = self._give_response_single(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\refine.py", line 89, in _give_response_single
    ) = self._service_context.llm_predictor.predict(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 245, in predict
    llm_prediction = self._predict(prompt, **prompt_args)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 213, in _predict
    llm_prediction = retry_on_exceptions_with_backoff(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\utils.py", line 177, in retry_on_exceptions_with_backoff
    return lambda_fn()
           ^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 214, in <lambda>
    lambda: llm_chain.predict(**full_prompt_args),
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 79, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 191, in generate
    raise e
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 438, in _generate
    else self._call(prompt, stop=stop)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 44, in _call
    llm = Llama(model_path=MODEL_PATH, n_threads=NUM_THREADS, n_ctx=n_ctx)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_cpp\llama.py", line 158, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_cpp\llama_cpp.py", line 262, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError -529697949] Windows Error 0xe06d7363
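
For reference, 0xe06d7363 is the SEH code Windows uses for an unhandled Microsoft C++ exception (the low bytes spell "msc"), so llama.cpp threw a C++ exception inside llama_init_from_file. The usual trigger is a model file whose ggml format the installed llama-cpp-python build does not support, which would be consistent with the report above: only the ggmlv3 snoozy file loaded, while the old-format vic7b file crashed. A quick way to see which format each file uses is to read its leading magic/version words. The magic constants below are taken from llama.cpp's loader as of mid-2023 and should be treated as assumptions; check llama.h in the installed version to confirm:

import struct

# Known llama.cpp file magics (circa May 2023) -- assumptions, verify locally.
KNOWN_MAGICS = {
    0x67676d6c: "ggml (old, unversioned)",
    0x67676d66: "ggmf (versioned)",
    0x67676a74: "ggjt (versioned; 'ggmlv3' files are ggjt v3)",
}

def model_format(path):
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        label = KNOWN_MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
        if magic in (0x67676d66, 0x67676a74):
            (version,) = struct.unpack("<I", f.read(4))
            label += f", version {version}"
    return label

print(model_format(r"C:\Users\elnuevo\Downloads\ggml-old-vic7b-q4_0.bin"))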
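
On the slowness: the traceback shows local_llama.py constructing the Llama object inside _call (line 44), so the multi-GB model is reloaded from disk on every query. A hedged sketch of a module-level cache that would avoid the reload; get_llm is a hypothetical helper, not part of the repo:

from functools import lru_cache
from llama_cpp import Llama

@lru_cache(maxsize=1)
def get_llm(model_path: str, n_threads: int, n_ctx: int) -> Llama:
    # Load the model once and reuse it across queries; reloading the
    # ggml file on every _call dominates response time.
    return Llama(model_path=model_path, n_threads=n_threads, n_ctx=n_ctx)

# Inside the custom LLM's _call, replace the per-call constructor with:
#     llm = get_llm(MODEL_PATH, NUM_THREADS, n_ctx)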