Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation (CVPR 2023)

This is the official code for Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation.

Abstract

Animating virtual avatars to make co-speech gestures facilitates various applications in human-machine interaction. The existing methods mainly rely on generative adversarial networks (GANs), which typically suffer from notorious mode collapse and unstable training, thus making it difficult to learn accurate audio-gesture joint distributions. In this work, we propose a novel diffusion-based framework, named Diffusion Co-Speech Gesture (DiffGesture), to effectively capture the cross-modal audio-to-gesture associations and preserve temporal coherence for high-fidelity audio-driven co-speech gesture generation. Specifically, we first establish the diffusion-conditional generation process on clips of skeleton sequences and audio to enable the whole framework. Then, a novel Diffusion Audio-Gesture Transformer is devised to better attend to the information from multiple modalities and model the long-term temporal dependency. Moreover, to eliminate temporal inconsistency, we propose an effective Diffusion Gesture Stabilizer with an annealed noise sampling strategy. Benefiting from the architectural advantages of diffusion models, we further incorporate implicit classifier-free guidance to trade off between diversity and gesture quality. Extensive experiments demonstrate that DiffGesture achieves state-of-the-art performance, which renders coherent gestures with better mode coverage and stronger audio correlations.
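As background on the implicit classifier-free guidance mentioned above, the sketch below shows how such guidance is typically applied at sampling time: the denoiser is queried with and without the audio condition, and the two noise predictions are blended, with a larger guidance scale trading diversity for gesture quality. This is a generic illustration under assumed names (eps_model, audio_context, guidance_scale), not the repository's actual implementation.

def guided_noise_prediction(eps_model, x_t, t, audio_context, guidance_scale=1.0):
    # Generic classifier-free guidance sketch; names are illustrative
    # and not taken from this repository.
    # Conditional prediction: the denoiser attends to the audio context.
    eps_cond = eps_model(x_t, t, context=audio_context)
    # Unconditional prediction: the context is dropped (e.g., a null embedding).
    eps_uncond = eps_model(x_t, t, context=None)
    # A larger guidance_scale pushes samples toward the audio-conditioned mode,
    # trading sample diversity for gesture quality.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)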

Installation & Preparation

  1. Clone this repository and install packages:

    git clone https://github.com/Advocate99/DiffGesture.git
    pip install -r requirements.txt
    
  2. Download the pretrained fastText model from here and put crawl-300d-2M-subword.bin and crawl-300d-2M-subword.vec at data/fasttext/.

  3. Download the autoencoders used for FGD, which include the following:

    For the TED Gesture Dataset, we use the pretrained Auto-Encoder model provided by Yoon et al. for better reproducibility; use the checkpoint in the train_h36m_gesture_autoencoder folder.

    For the TED Expressive Dataset, the pretrained Auto-Encoder model is provided here.

    Save the models as output/train_h36m_gesture_autoencoder/gesture_autoencoder_checkpoint_best.bin for TED Gesture and output/TED_Expressive_output/AE-cos1e-3/checkpoint_best.bin for TED Expressive (a quick path check is sketched after this list).

  4. Refer to HA2G to download the two datasets.

  5. The pretrained models can be found here.
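Before training, a quick check like the one below (a minimal sketch; adjust the paths if your layout differs) can confirm that the files from steps 2 and 3 sit where the scripts expect them:

import os

# Expected locations from preparation steps 2 and 3 above.
expected_files = [
    "data/fasttext/crawl-300d-2M-subword.bin",
    "data/fasttext/crawl-300d-2M-subword.vec",
    "output/train_h36m_gesture_autoencoder/gesture_autoencoder_checkpoint_best.bin",
    "output/TED_Expressive_output/AE-cos1e-3/checkpoint_best.bin",
]

for path in expected_files:
    status = "ok" if os.path.isfile(path) else "MISSING"
    print(f"[{status}] {path}")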

Training

While the test metrics may vary slightly across runs, training with the given config files generally yields similar performance and typically outperforms all the comparison methods.

python scripts/train_ted.py --config=config/pose_diffusion_ted.yml
python scripts/train_expressive.py --config=config/pose_diffusion_expressive.yml

Inference

# synthesize short videos
python scripts/test_ted.py short
python scripts/test_expressive.py short

# synthesize long videos
python scripts/test_ted.py long
python scripts/test_expressive.py long

# metrics evaluation
python scripts/test_ted.py eval
python scripts/test_expressive.py eval

Citation

If you find our work useful, please cite it as:

@inproceedings{zhu2023taming,
  title={Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation},
  author={Zhu, Lingting and Liu, Xian and Liu, Xuanyu and Qian, Rui and Liu, Ziwei and Yu, Lequan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10544--10553},
  year={2023}
}

Related Links

If you are interested in Audio-Driven Co-Speech Gesture Generation, you may also want to check out our other related works:

  • Hierarchical Audio-to-Gesture, HA2G.

  • Audio-Driven Co-Speech Gesture Video Generation, ANGIE.

Acknowledgement