EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

The official repository of the paper EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

Paper | Project Page | Code

Given an identity source, EDTalk synthesizes talking-face videos whose mouth shapes, head poses, and expressions are consistent with the mouth ground truth (GT), the pose source, and the expression source, respectively. These facial dynamics can also be inferred directly from driving audio. Importantly, EDTalk's disentanglement training is more efficient than that of other methods.

TODO

  • Release arXiv paper.
  • Release code. (Once the paper is accepted)
  • Release Pre-trained Model. (Once the paper is accepted)

Citation

@article{tan2024edtalk,
  title={EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis},
  author={Tan, Shuai and Ji, Bin and Bi, Mengxiao and Pan, Ye},
  journal={arXiv preprint arXiv:2404.01647},
  year={2024}
}

Acknowledgement

Some figures in the paper are inspired by:

The README.md template is borrowed from SyncTalk.

Thanks to these great projects.