GFPGAN-1024

You can train or finetune your own GFPGAN-1024 model on your own dataset! Inputs: 512 -> outputs: 1024

!!News!!

You can start training from my model. It contains everything!

My results

Comparison images: original | gfpgan | gfpgan-1024

ENVIRONMENT

pip install -r requirements.txt

DATASET

  1. Prepare FFHQ-1024 data.
  2. Collect your own pictures and align them.
  3. Run image super-resolution through open APIs like the one here.
  4. Get facial landmarks to enhance the eyes and mouth (an illustrative sketch follows this list):
    1. git clone git@github.com:LeslieZhoa/LVT.git
    2. Download the model.
    3. Change the LVT file.
    4. Change the landmark model file.
    5. Change the image root.
    6. Change the save root.
    7. Run cd process; python get_roi.py
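The landmark step is handled by LVT and get_roi.py. Purely as an illustration of the idea, and not the repository's actual code, eye and mouth ROIs can be cut out around landmark clusters, assuming a standard 68-point landmark layout:

```python
# Hypothetical illustration (not the repo's get_roi.py): derive eye/mouth ROI boxes
# from 68-point facial landmarks so the component discriminators can use them.
import numpy as np

def landmark_rois(landmarks: np.ndarray, half_size: int = 40):
    """landmarks: (68, 2) array in image coordinates (assumed layout)."""
    centers = {
        'left_eye': landmarks[36:42].mean(axis=0),   # points 37-42 in the 68-pt scheme
        'right_eye': landmarks[42:48].mean(axis=0),  # points 43-48
        'mouth': landmarks[48:68].mean(axis=0),      # points 49-68
    }
    rois = {}
    for name, (cx, cy) in centers.items():
        # Square box of side 2*half_size centered on the landmark cluster: x0, y0, x1, y1.
        rois[name] = (int(cx - half_size), int(cy - half_size),
                      int(cx + half_size), int(cy + half_size))
    return rois
```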

Download

Refer to GFPGAN to download the following models:

  1. GFPGANv1.4.pth
  2. GFPGANv1_net_d_left_eye.pth
  3. GFPGANv1_net_d_mouth.pth
  4. GFPGANv1_net_d_right_eye.pth
  5. arcface_resnet18.pth
  6. Get the VGG model here.
  7. Get the discriminator here, which is converted from the original StyleGAN2.

Put these models into pretrained_models.
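A quick sanity check is sketched below; it assumes the weights sit directly under pretrained_models/ and omits the VGG and StyleGAN2 discriminator files, whose exact filenames are not listed above:

```python
import os

# Weights listed above; the VGG and converted StyleGAN2 discriminator weights
# are also required, but their filenames are not given here.
EXPECTED = [
    'GFPGANv1.4.pth',
    'GFPGANv1_net_d_left_eye.pth',
    'GFPGANv1_net_d_mouth.pth',
    'GFPGANv1_net_d_right_eye.pth',
    'arcface_resnet18.pth',
]

missing = [name for name in EXPECTED
           if not os.path.exists(os.path.join('pretrained_models', name))]
print('missing:', missing if missing else 'none')
```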

Train

Config change

Change the dataset paths in model/config.py:

self.img_root -> FFHQ data root
self.train_hq_root -> your own 1024 HQ data root
self.train_lq_root -> your own LQ data root
self.train_lmk_base -> train landmarks generated by get_roi.py
self.val_lmk_base -> val landmarks generated by get_roi.py
self.val_lq_root -> val LQ data root
self.val_hq_root -> val HQ data root
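For reference, the edits would look roughly like this inside the config class in model/config.py; every path below is a placeholder for your own directories:

```python
# Inside the config class in model/config.py -- every path is a placeholder.
self.img_root       = '/data/ffhq1024'        # FFHQ data root
self.train_hq_root  = '/data/my_faces_1024'   # your own 1024 HQ data root
self.train_lq_root  = '/data/my_faces_lq'     # your own LQ data root
self.train_lmk_base = '/data/lmk/train'       # train landmarks from get_roi.py
self.val_lmk_base   = '/data/lmk/val'         # val landmarks from get_roi.py
self.val_lq_root    = '/data/val_lq'          # val LQ data root
self.val_hq_root    = '/data/val_hq'          # val HQ data root
```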

Stage 1: train the decoder

Set self.mode = 'decoder' in model/config.py.
Train until you think it is good enough.

Stage 2: train the encoder

Set self.mode = 'encoder' and set self.pretrain_path to the stage 1 checkpoint in model/config.py.
Train until you think it is good enough.

Stage 3: train the whole network

Set self.mode = 'encoder' and set self.pretrain_path to the stage 2 checkpoint in model/config.py.
Use early stopping.
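Putting the three stages together, the relevant settings in model/config.py progress roughly as follows; the checkpoint paths are placeholders:

```python
# Stage 1: train the decoder from scratch.
self.mode = 'decoder'

# Stage 2: train the encoder, starting from the stage 1 checkpoint (placeholder path).
self.mode = 'encoder'
self.pretrain_path = 'checkpoint/stage1/latest.pth'

# Stage 3: train the whole network, starting from the stage 2 checkpoint; use early stopping.
self.mode = 'encoder'
self.pretrain_path = 'checkpoint/stage2/latest.pth'
```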

Run the code

stage 1 && stage 2 -> python train.py --batch_size 2 --scratch --dist
stage 3 -> python train.py --batch_size 2 --early_stop --dist
Multi-node, multi-GPU training is supported.

Convert to a TorchScript model

Multi-batch inference is supported:
python utils/convert_pt.py
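Once converted, the scripted model can be loaded and run on a batch. The sketch below is an assumption-laden usage example: the exported file name, the input normalization, and the output format are not documented above, so verify them against utils/convert_pt.py.

```python
# Hedged usage sketch for the exported TorchScript model.
# Assumptions: file name 'gfpgan1024.pt', inputs are 512x512 RGB batches in [-1, 1],
# outputs are 1024x1024 -- check utils/convert_pt.py for the real conventions.
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.jit.load('gfpgan1024.pt', map_location=device).eval()

# A batch of 4 face images, 512x512, values assumed to be in [-1, 1].
x = torch.rand(4, 3, 512, 512, device=device) * 2 - 1

with torch.no_grad():
    y = model(x)  # expected shape: (4, 3, 1024, 1024); may be a tuple depending on the export

print(y.shape if torch.is_tensor(y) else [t.shape for t in y])
```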