avatars4all

Live real-time avatars from your webcam in the browser. No dedicated hardware or software installation needed. A pure Google Colab wrapper for the live First Order Motion Model (aka Avatarify) in the browser, plus other Colabs providing an accessible interface for using FOMM, Wav2Lip and Liquid Warping GAN with your own media and a rich GUI.

Based on the works:

  1. First Order Motion Model for Image Animation, https://aliaksandrsiarohin.github.io/first-order-model-website
  2. Avatarify, https://github.com/alievk/avatarify
  3. Webcam for Google Colab over Websocket, https://github.com/a2kiti/webCamGoogleColab
  4. Wav2Lip, Accurately Lip-sync Videos to Any Speech, http://bhaasha.iiit.ac.in/lipsync
  5. Liquid Warping GAN (Impersonator), https://svip-lab.github.io/project/impersonator
  6. Liquid Warping GAN (Impersonator++), https://www.impersonator.org/work/impersonator-plus-plus.html
  7. pyAudioAnalysis, https://github.com/tyiannak/pyAudioAnalysis
  8. pyannote-audio, https://github.com/pyannote/pyannote-audio
  9. U^2-Net, https://github.com/NathanUA/U-2-Net
  10. MODNet, https://github.com/ZHKKKe/MODNet

In this repository you will find:

  1. Colab for live real-time talking head deep-fakes from your webcam. (j.mp/cam2head)
  2. Colab for creating talking head deep-fakes (VoxCeleb model) from YouTube or other videos. (j.mp/vid2head)
  3. Colab for creating full body deep-fakes (Tai chi and fashion models) from YouTube or other videos. (j.mp/vid2body)
  4. Colab for creating full body deep-fakes (impersonator model) from YouTube or other videos. (j.mp/vid2act)
  5. Colab for creating full body deep-fakes (impersonator++ model) from YouTube or other videos. (j.mp/vid2warp)
  6. Colab for creating lip sync deep-fakes based on audio. (j.mp/wav2lip)
  7. Colab for a green screen effect on video, with optional background video, plus sketch, bokeh and more effects. (j.mp/vid2green)
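
The green screen Colab swaps the background using a segmentation matte from models such as U^2-Net or MODNet. A minimal sketch of the final compositing step, assuming a per-pixel alpha matte in [0, 1] (the function name and demo values are illustrative, not from the repository):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend a foreground frame over a background using a matte.

    foreground, background: HxWx3 float arrays in [0, 1]
    alpha: HxW float matte in [0, 1] (1.0 = keep the foreground pixel)
    """
    alpha = alpha[..., np.newaxis]  # add a channel axis so it broadcasts over RGB
    return alpha * foreground + (1.0 - alpha) * background

# Tiny demo: a 2x2 frame where only the right column is kept as "person".
fg = np.full((2, 2, 3), 0.8)                # foreground pixels
bg = np.zeros((2, 2, 3))                    # replacement background
matte = np.array([[0.0, 1.0], [0.0, 1.0]])  # keep only the right column
out = composite(fg, bg, matte)
```

The same blend works for a background *video* by feeding a different `background` frame at each time step.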

Features:

  1. The fastest purely online solution I am aware of for live real-time first-order-motion-model avatars from your webcam.
  2. A new auto-calibration mode that works in real-time!
  3. A new exaggeration factor to get those damn muppets to open their mouths!
  4. Drag and drop local/web images on the GUI to upload new avatars!
  5. Options to switch between avatars, including newly generated StyleGAN faces, as inspired by Avatarify.
  6. Smart auto-pad/crop/resize to the head or body, for images and for offline videos, tuned for best results.
  7. Full control of model parameters as well as zoom and buffering options in the GUI.
  8. Upload your own images and videos or pull them from the web, including from YouTube, and optionally trim videos.
  9. Visualization of facial landmarks and their alignment between source and target.
  10. Download videos with the original audio and framerate, optimized for compatibility.
  11. One-click operation with Runtime -> Run all.
  12. Optional Wav2Lip post-processing following head animation.
  13. Combining Wav2Lip with speaker diarization for automatic animated skit creation from audio ("Wav2Skit").
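
The "Wav2Skit" idea can be pictured as assigning each video frame to whichever speaker the diarization (e.g. from pyannote-audio) says is talking, then lip-syncing that speaker's avatar with Wav2Lip. A minimal sketch of the segment-to-frame mapping under that assumption (the function and its signature are illustrative, not the repository's actual code):

```python
def active_speaker_per_frame(segments, duration, fps=25):
    """Map speaker-diarization segments to an active speaker for each frame.

    segments: list of (start_sec, end_sec, speaker_label) tuples,
              as a diarization tool would produce.
    duration: clip length in seconds.
    Returns one speaker label per frame, or None where nobody speaks.
    """
    n_frames = int(duration * fps)
    labels = [None] * n_frames
    for start, end, speaker in segments:
        first = int(start * fps)
        last = min(int(end * fps), n_frames)
        for i in range(first, last):
            labels[i] = speaker  # later segments win on overlap
    return labels

# Two speakers taking turns over a 2-second clip at 25 fps:
timeline = active_speaker_per_frame(
    [(0.0, 0.8, "A"), (1.0, 2.0, "B")], duration=2.0, fps=25)
# Frames 0-19 -> "A", frames 20-24 -> None (silence), frames 25-49 -> "B"
```

With such a timeline, each frame can show the avatar of the active speaker while Wav2Lip animates its mouth from the audio.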

Reference implementations:

  1. https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb
  2. https://colab.research.google.com/github/tg-bomze/Face-Image-Motion-Model/blob/master/Face_Image_Motion_Model_(Photo_2_Video)_Eng.ipynb
  3. https://colab.research.google.com/github/alievk/avatarify/blob/master/avatarify.ipynb
  4. https://colab.research.google.com/github/a2kiti/webCamGoogleColab/blob/master/webCamGoogleColab_websocketVersion.ipynb
  5. https://colab.research.google.com/github/thefonseca/colabrtc/blob/master/examples/colabrtc.ipynb
  6. https://github.com/l4rz/first-order-model/tree/master/webrtc
  7. https://gist.github.com/myagues/aac0c597f8ad0fa7ebe7d017b0c5603b
  8. https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8
  9. https://colab.research.google.com/github/svip-lab/impersonator/blob/master/impersonator.ipynb
  10. https://colab.research.google.com/drive/1bwUnj-9NnJA2EMr7eWO4I45UuBtKudg_
  11. https://terryky.github.io/tfjs_webgl_app/face_landmark
  12. https://eyaler.github.io/tfjs_webgl_app/face_landmark

Workshops, tutorials and talks
