Hello, I have been trying to install this repository for two days, but I couldn't get it working. I tried every PyTorch version from 1.9.0 up to 2.2.1. If anyone has had a similar problem, I would really appreciate it if you could share your insights on how to install video-retalking locally.
On my laptop I have Nvidia driver version 535.161.07 installed (which supports CUDA versions up to 12.2).
I'm running Ubuntu 22.04.4 LTS.
I have an Nvidia Quadro RTX 5000 Mobile with 16 GB of VRAM and 64 GB of system RAM.
The drivers are installed properly, as I can run Automatic1111 and ComfyUI normally in separate Conda environments.
I installed this repository following the original documentation without any changes, i.e.:
git clone https://github.com/vinthony/video-retalking.git
cd video-retalking
conda create -n video_retalking python=3.8
conda activate video_retalking
conda install ffmpeg
# Please follow the instructions from https://pytorch.org/get-started/previous-versions/
# This installation command only works on CUDA 11.1
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
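As a side note, it may be worth confirming that the wheel that actually got installed carries the expected CUDA tag, since pip can silently fall back to a CPU-only build. A small sketch of the check (the `+cu<XYZ>` suffix is PyTorch's wheel-naming convention; the helper function is my own, for illustration):

```python
def wheel_cuda_tag(version: str):
    """Extract the CUDA tag from a PyTorch wheel version string.

    "1.9.0+cu111" -> "cu111"; CPU-only builds like "1.9.0" or
    "1.9.0+cpu" return None.
    """
    _, sep, local = version.partition("+")
    return local if sep and local.startswith("cu") else None

# In the actual environment, the string to check is torch.__version__:
print(wheel_cuda_tag("1.9.0+cu111"))  # → cu111
print(wheel_cuda_tag("1.9.0"))        # → None
```

In the environment itself, `python -c "import torch; print(torch.__version__, torch.version.cuda)"` should report the `+cu111` build and CUDA 11.1.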
After that I downloaded all the checkpoints and placed them into the checkpoints folder as described in README.md.
I then ran the inference command from the README. However, I quickly received the following error saying that CUDA_HOME is not set:
Traceback (most recent call last):
File "inference.py", line 16, in <module>
from third_part.GPEN.gpen_face_enhancer import FaceEnhancement
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/gpen_face_enhancer.py", line 8, in <module>
from face_model.face_gan import FaceGAN
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/face_gan.py", line 13, in <module>
from face_model.gpen_model import FullGenerator
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/gpen_model.py", line 16, in <module>
from face_model.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_act.py", line 13, in <module>
fused = load(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1080, in load
return _jit_compile(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1293, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1382, in _write_ninja_file_and_build_library
extra_ldflags = _prepare_ldflags(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1490, in _prepare_ldflags
extra_ldflags.append(f'-L{_join_cuda_home("lib64")}')
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1984, in _join_cuda_home
raise EnvironmentError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Thus, I specified CUDA_HOME as follows: export CUDA_HOME=$CONDA_PREFIX.
Running inference again after that throws another exception, which I can't figure out how to fix yet:
Traceback (most recent call last):
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1666, in _run_ninja_build
subprocess.run(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "inference.py", line 16, in <module>
from third_part.GPEN.gpen_face_enhancer import FaceEnhancement
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/gpen_face_enhancer.py", line 8, in <module>
from face_model.face_gan import FaceGAN
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/face_gan.py", line 13, in <module>
from face_model.gpen_model import FullGenerator
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/gpen_model.py", line 16, in <module>
from face_model.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/__init__.py", line 1, in <module>
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_act.py", line 13, in <module>
fused = load(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1080, in load
return _jit_compile(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1293, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1405, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1682, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused': [1/3] /home/user/miniconda3/envs/video_retalking/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/TH -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/video_retalking/include -isystem /home/user/miniconda3/envs/video_retalking/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++14 -c /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/home/user/miniconda3/envs/video_retalking/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/TH -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/video_retalking/include -isystem /home/user/miniconda3/envs/video_retalking/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++14 -c /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
/bin/sh: 1: /home/user/miniconda3/envs/video_retalking/bin/nvcc: not found
[2/3] c++ -MMD -MF fused_bias_act.o.d -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/TH -isystem /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/video_retalking/include -isystem /home/user/miniconda3/envs/video_retalking/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp -o fused_bias_act.o
In file included from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/DeviceType.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/Device.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/Allocator.h:6,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/ATen.h:7,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:1:
/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp: In function ‘at::Tensor fused_bias_act(const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, float, float)’:
/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:7:41: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
7 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^
/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:13:5: note: in expansion of macro ‘CHECK_CUDA’
13 | CHECK_CUDA(input);
| ^~~~~~~~~~
In file included from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/Tensor.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/Context.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:1:
/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
338 | DeprecatedTypeProperties & type() const {
| ^~~~
In file included from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/DeviceType.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/Device.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/c10/core/Allocator.h:6,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/ATen.h:7,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:1:
/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:7:41: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
7 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^
/srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:14:5: note: in expansion of macro ‘CHECK_CUDA’
14 | CHECK_CUDA(bias);
| ^~~~~~~~~~
In file included from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/Tensor.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/Context.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from /srv/shared/AI/AUTOMATIC1111/video-retalking-fork3/third_part/GPEN/face_model/op/fused_bias_act.cpp:1:
/home/user/miniconda3/envs/video_retalking/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
338 | DeprecatedTypeProperties & type() const {
| ^~~~
ninja: build stopped: subcommand failed.
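For what it's worth, I suspect the key line is `/bin/sh: 1: /home/user/miniconda3/envs/video_retalking/bin/nvcc: not found`: pointing CUDA_HOME at $CONDA_PREFIX only helps if a CUDA compiler is actually installed in that environment, and a plain conda env (even one with the cudatoolkit runtime package) does not ship nvcc. A quick pre-flight check before re-running inference (the helper name is my own):

```shell
# Verify that CUDA_HOME really contains the nvcc the JIT build will invoke.
check_cuda_home() {
  if [ -n "$CUDA_HOME" ] && [ -x "$CUDA_HOME/bin/nvcc" ]; then
    echo "ok: $CUDA_HOME"
  else
    echo "missing nvcc under: ${CUDA_HOME:-<unset>}"
  fi
}
check_cuda_home
```

If this prints the "missing" line, the options would seem to be either installing a full CUDA 11.1 toolkit on the host and pointing CUDA_HOME at it, or finding a conda package that provides nvcc for the matching CUDA version.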
I tried the same steps with different PyTorch versions, but that didn't fix the problem.
If anyone has had a similar issue, has any idea what it could be, or knows a workaround, I would really appreciate it if you could share your thoughts.