# DragGAN (SIGGRAPH'2023)

**Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold**

**Task**: DragGAN

## Abstract

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects. Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality. In this work, we study a powerful yet much less explored way of controlling GANs, that is, to "drag" any points of the image to precisely reach target points in a user-interactive manner, as shown in Fig.1. To achieve this, we propose DragGAN, which consists of two main components: 1) a feature-based motion supervision that drives the handle point to move towards the target position, and 2) a new point tracking approach that leverages the discriminative generator features to keep localizing the position of the handle points. Through DragGAN, anyone can deform an image with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc. As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object's rigidity. Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking. We also showcase the manipulation of real images through GAN inversion.
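The abstract names the two components at the core of DragGAN: feature-based motion supervision and point tracking on generator features. The snippet below is a minimal, illustrative PyTorch sketch of how these two pieces could look; it is **not** the MMagic implementation, and the function names, patch sampling, tensor shapes, and search radius are assumptions made only for illustration.

```python
# Illustrative sketch (not MMagic's code) of DragGAN's two components on a
# generator feature map of shape (1, C, H, W).
import torch
import torch.nn.functional as F


def motion_supervision_loss(feature_map, handle, target, radius=3):
    """Pull the features around the handle point one small step towards the target.

    feature_map: (1, C, H, W) intermediate generator features (requires grad).
    handle, target: (x, y) pixel coordinates as float tensors of shape (2,).
    """
    direction = target - handle
    direction = direction / (direction.norm() + 1e-8)  # unit step towards the target

    def sample_patch(center):
        # Bilinearly sample a (2*radius+1)^2 patch of features around `center`.
        _, _, H, W = feature_map.shape
        offsets = torch.arange(-radius, radius + 1, dtype=torch.float32)
        dy, dx = torch.meshgrid(offsets, offsets, indexing="ij")
        grid_x = (center[0] + dx) / (W - 1) * 2 - 1  # normalise to [-1, 1]
        grid_y = (center[1] + dy) / (H - 1) * 2 - 1
        grid = torch.stack([grid_x, grid_y], dim=-1).unsqueeze(0)
        return F.grid_sample(feature_map, grid, align_corners=True)

    current = sample_patch(handle).detach()     # stop-gradient on the current patch ...
    shifted = sample_patch(handle + direction)  # ... so the shifted patch is pulled onto it
    return F.l1_loss(shifted, current)


def track_handle(feature_map, initial_feature, handle, radius=2):
    """Relocate the handle point by nearest-neighbour search in feature space."""
    _, C, H, W = feature_map.shape
    best_dist, best_pos = None, handle
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = int(handle[0].item()) + dx
            y = int(handle[1].item()) + dy
            if 0 <= x < W and 0 <= y < H:
                dist = (feature_map[0, :, y, x] - initial_feature).norm()
                if best_dist is None or dist < best_dist:
                    best_dist, best_pos = dist, torch.tensor([float(x), float(y)])
    return best_pos
```

In each optimization step, the loss nudges the latent code so that the content at the handle moves towards the target, and the tracker then re-estimates where the handle landed before the next step.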

## Results and Models

Gradio Demo of DragGAN StyleGAN2-elephants-512 by MMagic
| Model | Dataset | Comment | FID50k | Precision50k | Recall50k | Download |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| stylegan2_lion_512x512 | Internet Lions | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_elphants_512x512 | Internet Elephants | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_cats_512x512 | Cat AFHQ | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_face_512x512 | FFHQ | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_horse_256x256 | LSUN-Horse | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_dogs_1024x1024 | Internet Dogs | self-distilled StyleGAN | 0.0 | 0.0 | 0.0 | model |
| stylegan2_car_512x512 | Car | transfer from official training | 0.0 | 0.0 | 0.0 | model |
| stylegan2_cat_256x256 | Cat | transfer from official training | 0.0 | 0.0 | 0.0 | model |

## Demo

To run the DragGAN demo, please follow these two steps:

First, place your checkpoint in `./checkpoints`, e.g. `./checkpoints/stylegan2_lions_512_pytorch_mmagic.pth`. To be specific:

```shell
mkdir checkpoints
cd checkpoints
wget -O stylegan2_lions_512_pytorch_mmagic.pth https://download.openxlab.org.cn/models/qsun1/DragGAN-StyleGAN2-checkpoint/weight//StyleGAN2-Lions-internet
```
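If you want to sanity-check the download before launching the demo, a minimal sketch such as the following can be used (it only assumes `torch.load` and the checkpoint path created above; the exact key layout of MMagic checkpoints may differ):

```python
# Quick, illustrative check that the downloaded checkpoint file loads cleanly.
import torch

ckpt = torch.load(
    "checkpoints/stylegan2_lions_512_pytorch_mmagic.pth",
    map_location="cpu",  # no GPU needed just to inspect the file
)
# Checkpoints are typically dicts; printing the top-level keys catches a
# truncated or failed download early.
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```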

Then, run the script:

```shell
python demo/gradio_draggan.py
```
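The script launches a Gradio app; Gradio prints a local URL on startup (typically http://127.0.0.1:7860), which you can open in a browser to set handle and target points and drag the generated image interactively.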

## Citation

```bibtex
@inproceedings{pan2023drag,
  title={Drag your gan: Interactive point-based manipulation on the generative image manifold},
  author={Pan, Xingang and Tewari, Ayush and Leimk{\"u}hler, Thomas and Liu, Lingjie and Meka, Abhimitra and Theobalt, Christian},
  booktitle={ACM SIGGRAPH 2023 Conference Proceedings},
  pages={1--11},
  year={2023}
}
```