Llama-CPP Install Issue - Windows #10820
Comments
Hi @ElliottDyson,
This is definitely odd behaviour. I have done this, yes, and the call printed out the correct statements too. I know I can usually get this working properly, as I have a separate environment for running models directly with Hugging Face's transformers library, and that also works: I use imports from IPEX-LLM there and run inference on the device without error.
Hi @ElliottDyson,
No, I only mentioned the LLM (transformer-based) environment to show that my following of the installation process, my drivers, oneAPI, and Visual Studio are not the causes of the problems in the llm-cpp environment. Everything to do with llama.cpp is being done in the llm-cpp environment. Sorry for the confusion.
The newest drivers are in use, the system is a Ryzen 2700x CPU with 16GB of RAM and a 16GB A770 GPU on Windows 11.
The instructions in the docs were followed precisely.
Upon attempting to execute main.exe (either with -h or when launching a .gguf file), I get a system error stating that code execution could not proceed because "mkl_sycl_blas.4.dll was not found". I have tried wiping the folder and re-running the batch file, but with no change.
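For anyone hitting the same error: on Windows, "mkl_sycl_blas.4.dll was not found" at process start generally means the loader cannot locate that oneAPI MKL runtime DLL in any directory on PATH (setting up PATH is normally what the oneAPI setvars.bat handles). As a quick sanity check, a small diagnostic sketch like the one below can confirm whether the current PATH resolves the DLL named in the error; only the DLL name from the message above is assumed, nothing about the install layout:

```python
import os

def find_on_path(filename, path_env=None):
    """Return the full path of every copy of `filename` found in PATH directories."""
    raw = path_env if path_env is not None else os.environ.get("PATH", "")
    hits = []
    for d in raw.split(os.pathsep):
        candidate = os.path.join(d, filename)
        if d and os.path.isfile(candidate):
            hits.append(candidate)
    return hits

if __name__ == "__main__":
    # An empty list means the loader would fail the same way main.exe does.
    print(find_on_path("mkl_sycl_blas.4.dll"))
```

If this prints an empty list in the same shell where main.exe fails, the oneAPI environment was likely not activated in that shell before launching.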
Some help would be much appreciated. Thank you.