# GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation


English Readme

This repository is the official PyTorch implementation of GeneFace++, a system for synthesizing talking-head videos with accurate lip-sync, high video realism, and high system efficiency. Visit our project page to watch demo videos, and read our paper for the technical details.



## You May Also Be Interested In

- We have released Real3D-Portrait (ICLR 2024 Spotlight, https://github.com/yerfor/Real3DPortrait), a one-shot NeRF-based talking-face synthesis system: upload a single photo and it synthesizes a realistic talking-head video!

## Quick Start!

Here we provide the fastest way to try out GeneFace++.

- Step 1: Following docs/prepare_env/install_guide.md, create a new Python environment named geneface and download the required 3DMM files. (A shell sketch of Steps 1-3 follows this list.)

- Step 2: Download the preprocessed dataset for May, trainval_dataset.npy (Google Drive or BaiduYun Disk, extraction code: 98n4), and place it at data/binary/videos/May/trainval_dataset.npy.

- Step 3: Download the pretrained general-purpose audio-to-motion model audio2motion_vae.zip (Google Drive or BaiduYun Disk, extraction code: 98n4) and the May-specific motion-to-video model motion2video_nerf.zip (Google Drive or BaiduYun Disk, extraction code: 98n4), then unzip them into the ./checkpoints/ directory.
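As a convenience, here is a minimal shell sketch of Steps 1-3. The Python version and the ~/Downloads paths are assumptions for illustration (docs/prepare_env/install_guide.md is the authoritative setup); the archives must first be downloaded manually from the Google Drive / BaiduYun links above.

```bash
# Step 1 (sketch): create the environment; Python 3.9 is an assumption --
# follow docs/prepare_env/install_guide.md for the exact version and packages.
conda create -n geneface python=3.9
conda activate geneface

# Step 2 (sketch): place the May dataset (assumes it was saved to ~/Downloads).
mkdir -p data/binary/videos/May
mv ~/Downloads/trainval_dataset.npy data/binary/videos/May/

# Step 3 (sketch): unpack both pretrained checkpoint archives.
mkdir -p checkpoints
unzip ~/Downloads/audio2motion_vae.zip -d checkpoints/
unzip ~/Downloads/motion2video_nerf.zip -d checkpoints/
```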

After completing the steps above, your checkpoints and data folders should be structured as follows:

```
> checkpoints
    > audio2motion_vae
    > motion2video_nerf
        > may_head
        > may_torso
> data
    > binary
        > videos
            > May
                trainval_dataset.npy
```
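A quick way to confirm the layout matches (standard shell, nothing repo-specific):

```bash
# List everything under both directories to compare against the tree above.
find checkpoints data/binary -maxdepth 4
```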
- Step 4: Activate the geneface Python environment, then run the following (a variant driven by your own audio appears after this list):

```bash
export PYTHONPATH=./
python inference/genefacepp_infer.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso --drv_aud=data/raw/val_wavs/MacronSpeech.wav --out_name=may_demo.mp4
```
- Alternatively, use the Gradio WebUI we provide:

```bash
export PYTHONPATH=./
python inference/app_genefacepp.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso
```
- Or open our Google Colab and run all of its cells.
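To drive the May avatar with your own recording instead of the bundled MacronSpeech.wav, point --drv_aud at another audio file. The path below is a hypothetical placeholder; a mono speech wav is a safe bet, though the script may handle other formats as well:

```bash
export PYTHONPATH=./
python inference/genefacepp_infer.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso --drv_aud=path/to/your_speech.wav --out_name=my_demo.mp4
```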

## Training GeneFace++ on Your Own Videos

If you want to train GeneFace++ on videos of your own target person, follow the steps in docs/process_data and docs/train_and_infer/.
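As a rough sketch of the very first preprocessing step (the authoritative pipeline lives in docs/process_data), a new target video is usually normalized to match the bundled examples first. The 512x512 resolution and 25 fps below are assumptions modeled on the May example, and my_recording.mp4 / YourName.mp4 are hypothetical filenames:

```bash
# Sketch only: normalize a raw recording before running docs/process_data.
# 512x512 @ 25 fps mirrors the provided examples -- verify against the docs.
mkdir -p data/raw/videos
ffmpeg -i my_recording.mp4 -vf scale=512:512 -r 25 -c:v libx264 -crf 18 -c:a aac data/raw/videos/YourName.mp4
```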

## ToDo

- Release Inference Code of Audio2Motion and Motion2Video.
- Release Pre-trained weights of Audio2Motion and Motion2Video.
- Release Training Code of Motion2Video Renderer.
- Release Gradio Demo.
- Release Google Colab.
- Release Training Code of Audio2Motion and Post-Net.

## Citation

If this repository helps you, please consider citing our work:

```bibtex
@article{ye2023geneface,
  title={GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis},
  author={Ye, Zhenhui and Jiang, Ziyue and Ren, Yi and Liu, Jinglin and He, Jinzheng and Zhao, Zhou},
  journal={arXiv preprint arXiv:2301.13430},
  year={2023}
}
@article{ye2023geneface++,
  title={GeneFace++: Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation},
  author={Ye, Zhenhui and He, Jinzheng and Jiang, Ziyue and Huang, Rongjie and Huang, Jiawei and Liu, Jinglin and Ren, Yi and Yin, Xiang and Ma, Zejun and Zhao, Zhou},
  journal={arXiv preprint arXiv:2305.00787},
  year={2023}
}
```