
Llama-CPP Install Issue - Windows #10820

Open · ElliottDyson opened this issue Apr 21, 2024 · 4 comments

Comments

@ElliottDyson

The newest drivers are in use, the system is a Ryzen 2700x CPU with 16GB of RAM and a 16GB A770 GPU on Windows 11.

The instructions in the docs were followed precisely.

Upon attempting to execute main.exe (either with -h or when launching a .gguf file) we get a system error stating that code execution couldn't proceed because "mkl_sycl_blas.4.dll was not found". I have tried wiping the folder and reactivating the batch file but with no change.

Some help would be much appreciated. Thank you.

@rnwang04
Contributor

Hi @ElliottDyson,
This error (mkl_sycl_blas.4.dll was not found) is caused by a missing oneAPI installation.
Have you correctly installed oneAPI 2024.0 and sourced it by calling "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" (if you are using an offline-installed oneAPI)?
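The check being suggested can be sketched as the following Windows cmd session. The setvars.bat path is the default offline oneAPI install location mentioned in the comment; the assumption that main.exe is in the current directory is mine:

```shell
:: Windows cmd sketch (assumes the default offline oneAPI 2024.0 install path).
:: Load the oneAPI environment so its runtime DLLs are on PATH.
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"

:: mkl_sycl_blas.4.dll should now be resolvable; -h just prints usage,
:: so it is a cheap way to confirm the binary can start at all.
main.exe -h
```

Note that setvars.bat only modifies the current shell session, so it has to be called again in every new terminal before running main.exe.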

@ElliottDyson
Author

ElliottDyson commented Apr 22, 2024

> Hi @ElliottDyson,
> This error (mkl_sycl_blas.4.dll was not found) is caused by a missing oneAPI installation.
> Have you correctly installed oneAPI 2024.0 and sourced it by calling "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" (if you are using an offline-installed oneAPI)?

This is definitely odd behaviour. Yes, I have done this, and the call printed the expected output too. I know this setup can normally work, because I have a separate environment for running models directly with Hugging Face's transformers library, and that works fine: I am using imports from IPEX-LLM and running inference on the device without error.

@rnwang04
Contributor

Hi @ElliottDyson,
Suppose you have two conda environments, named llm-cpp for the cpp-based code and llm for the transformers-based code.
Is your question why you can run the program directly in the llm environment, but need to manually call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" in the llm-cpp environment?
If so, my guess is that you did not use pip to install oneAPI in your llm-cpp conda env.
Try pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0 in your llm-cpp conda env;
then there is no need to call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" manually. And don't forget to execute main.exe under the llm-cpp env.
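The suggested fix can be sketched as the session below. The package pins are taken verbatim from the comment; the environment name llm-cpp is the example name used above, and the location of main.exe is an assumption:

```shell
# Sketch: install the oneAPI runtime wheels directly into the llm-cpp conda env
# (pins copied from the comment above), so the DLLs ship inside the env itself.
conda activate llm-cpp
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0

# With the runtime DLLs installed in the active env, main.exe should locate
# mkl_sycl_blas.4.dll without calling setvars.bat first.
main.exe -h
```

The design point here is that the pip wheels place the runtime libraries on the environment's own search path, whereas the offline installer relies on setvars.bat to export them per-session.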

@ElliottDyson
Author

ElliottDyson commented Apr 22, 2024

> Hi @ElliottDyson,
> Suppose you have two conda environments, named llm-cpp for the cpp-based code and llm for the transformers-based code.
> Is your question why you can run the program directly in the llm environment, but need to manually call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" in the llm-cpp environment?
> If so, my guess is that you did not use pip to install oneAPI in your llm-cpp conda env.
> Try pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0 in your llm-cpp conda env;
> then there is no need to call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" manually. And don't forget to execute main.exe under the llm-cpp env.

No, I only mentioned the llm (transformers-based) environment to help show that my installation process, drivers, oneAPI, and Visual Studio are not the causes of the problems in the llm-cpp environment. Everything to do with llama.cpp is being done in the llm-cpp environment. Sorry for the confusion.
