
Installation Errors #6

Closed
tombinary07 opened this issue Apr 25, 2024 · 5 comments

Comments

@tombinary07

DEPRECATION: omegaconf 2.0.6 has a non-standard dependency specifier PyYAML>=5.1.*. pip 24.1 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of omegaconf or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
realtimestt 0.1.15 requires scipy==1.12.0, but you have scipy 1.11.4 which is incompatible.
realtimestt 0.1.15 requires torch==2.2.2, but you have torch 2.1.2+cu118 which is incompatible.
realtimestt 0.1.15 requires torchaudio==2.2.2, but you have torchaudio 2.1.2+cu118 which is incompatible.
tts 0.22.0 requires librosa>=0.10.0, but you have librosa 0.9.1 which is incompatible.
tts 0.22.0 requires numpy==1.22.0; python_version <= "3.10", but you have numpy 1.26.4 which is incompatible.
stream2sentence 0.2.3 requires emoji==2.8.0, but you have emoji 2.10.1 which is incompatible.
Successfully installed torch-2.1.2+cu118 torchaudio-2.1.2+cu118
Successfully installed PyTorch and Torchaudio for CUDA 11.8.

Installing required deepspeed ...
Failed to install https://github.com/daswer123/deepspeed-windows/releases/download/11.2/deepspeed-0.11.2+cuda118-cp310-cp310-win_amd64.whl. Error: DEPRECATION: omegaconf 2.0.6 has a non-standard dependency specifier PyYAML>=5.1.*. pip 24.1 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of omegaconf or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063

-- Configuring incomplete, errors occurred!

*** CMake configuration failed
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: 'C:\Users\DEVO\AI_C\Linguflex\test_env\Scripts\python.exe' 'C:\Users\DEVO\AI_C\Linguflex\test_env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py' build_wheel 'C:\Users\DEVO\AppData\Local\Temp\tmpbkvrmjzt'
cwd: C:\Users\DEVO\AppData\Local\Temp\pip-install-v2c7qa2p\llama-cpp-python_6a239ebbad884300a54823bc225c4ef3
Building wheel for llama-cpp-python (pyproject.toml) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Failed to install llama-cpp-python. Error: Command '['C:\Users\DEVO\AI_C\Linguflex\test_env\Scripts\python.exe', '-m', 'pip', 'install', 'llama-cpp-python', '--force-reinstall', '--upgrade', '--no-cache-dir', '--verbose']' returned non-zero exit status 1.
You may need to copy MSBuildExtensions files for CUDA 11.8.
Copy all four MSBuildExtensions files from:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions
to
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations
before restarting the installation script or manually executing the following command:
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose
Do you want to continue without a verified installation of llama-cpp-python? (yes/no):
Do you want to try anyway? (yes/no):

Setting numpy version ...
Successfully installed numpy==1.23.5
Traceback (most recent call last):
File "C:\Users\DEVO\AI_C\Linguflex\download_models.py", line 6, in
from huggingface_hub import hf_hub_download
ModuleNotFoundError: No module named 'huggingface_hub'

@KoljaB
Owner

KoljaB commented Apr 25, 2024

Hey Tom,

Thank you for reporting these issues and sorry for the inconvenience. As you've experienced, this is a large project that integrates several complex libraries, some of which have specific and sometimes challenging installation requirements.

Here's a breakdown of the problems and suggested fixes:

  1. Dependency Conflicts:
    These can be ignored.

  2. Installation of DeepSpeed:
    Installing DeepSpeed via a precompiled wheel failed.
    The wheel may not match your environment; please look for a more suitable one here:
    Deepspeed Windows Wheels

    If this does not work, you may need to compile it yourself (DeepSpeed is not officially supported on Windows for Python versions higher than 3.9). More details (including instructions for manual compilation):
    DeepSpeed GitHub Repository

  3. Building wheel for llama-cpp-python Failed:
    This issue may be related to missing MSBuildExtensions. You should copy all necessary files from:
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions
    to:
    C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations
    (see the copy command sketched after this list). I'll add a hint to the installation doc.

  4. ModuleNotFoundError for huggingface_hub:
    Please run the following command in your virtual environment:

    pip install huggingface_hub
    

    After that, the ModuleNotFoundError should no longer occur. I will update the installation script to prevent this issue in the future.
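
For item 3, a minimal sketch of the copy step from an elevated command prompt (the paths assume a default CUDA 11.8 and Visual Studio 2022 Build Tools installation, so adjust them to your setup):

    copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions\*" "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations"
    pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose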

Sorry again for the problems and the missing hints in the installation doc. I haven't received much feedback about installation problems so far; I'll update the guides and improve the install code as well as I can.

@KoljaB
Owner

KoljaB commented Apr 25, 2024

I'll probably also add an option to install Linguflex completely without DeepSpeed. I think it is the most challenging library to install, and the benefits it brings are quite negligible; everything will work well enough without it.

@KoljaB
Owner

KoljaB commented Apr 25, 2024

If you can't get DeepSpeed installed (which, as I mentioned, would not be your fault; it's hard to install), you can disable it in the settings.

Open settings.yaml and, in the speech section, set the parameter coqui_use_deepspeed to False.
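
A minimal sketch of how that could look in settings.yaml (the surrounding structure is an assumption; coqui_use_deepspeed is the only parameter taken from the instruction above):

    speech:
      coqui_use_deepspeed: False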

@tombinary07
Author

Hi, I compiled and installed a DeepSpeed wheel for a specific Python version and now get no errors when installing Linguflex; see microsoft/DeepSpeed#4729

I still have to check llama-cpp-python.
Great work, thanks!
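
For anyone following along, a rough sketch of what compiling such a wheel can look like on Windows (these commands are assumptions based on DeepSpeed's documented build flags, not the exact steps used here; see microsoft/DeepSpeed#4729 for the ones that actually worked):

    git clone https://github.com/microsoft/DeepSpeed
    cd DeepSpeed
    rem skip ops that are problematic to build on Windows
    set DS_BUILD_AIO=0
    set DS_BUILD_SPARSE_ATTN=0
    python setup.py bdist_wheel
    rem the wheel is written to dist\ and must match the Python version of your virtual environment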

@Saphirah

Saphirah commented May 3, 2024

Installing required deepspeed ...
Failed to install https://github.com/daswer123/deepspeed-windows/releases/download/11.2/deepspeed-0.11.2+cuda118-cp310-cp310-win_amd64.whl. Error: DEPRECATION: omegaconf 2.0.6 has a non-standard dependency specifier PyYAML>=5.1.*. pip 24.1 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of omegaconf or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063

This actually installs DeepSpeed successfully, but because of the DEPRECATION message the install script detects it as failed.
When running pip list, deepspeed is shown, and I was able to run the program without problems, even with this error.
Just type "yes" when it asks you to continue.
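
To double-check that the wheel really was installed despite the reported failure, something like this works from inside the virtual environment (assumed but standard pip/Python commands):

    pip show deepspeed
    python -c "import deepspeed; print(deepspeed.__version__)"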

The llama-cpp-python error was fixed by moving the files, thanks @KoljaB
