
News ✨ ✨

  • [2024-01-16] Our paper is accepted by ICLR 2024.
  • [2023-12-26] Create project.

DAEFR

This repo includes the source code of the paper: "Dual Associated Encoder for Face Restoration" by Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C.K. Chan, and Ming-Hsuan Yang.

We propose a novel dual-branch framework named DAEFR. Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs. Additionally, we incorporate association training to promote effective synergy between the two branches, enhancing code prediction and output quality. We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets, demonstrating its superior performance in restoring facial details.

Environment

  • python>=3.7
  • pytorch>=1.7.1
  • pytorch-lightning==1.0.8
  • omegaconf==2.0.0
  • basicsr==1.3.3.4

You can also set up the environment by the following command:

conda env create -f environment.yml
conda activate DAEFR

Warning: Different versions of pytorch-lightning and omegaconf may lead to errors or different results.
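
If you prefer pip over conda, a minimal sketch (assuming a Python >= 3.7 environment with a CUDA-matched PyTorch >= 1.7.1 already installed; versions follow the list above):

# Hedged sketch: install the pinned dependencies with pip instead of conda.
pip install pytorch-lightning==1.0.8 omegaconf==2.0.0 basicsr==1.3.3.4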

Preparations of dataset and models

Training Dataset:

  • Training data: The HQ Codebook, LQ Codebook, and DAEFR are trained with FFHQ, which is obtained from the FFHQ repository.
  • The original FFHQ images are 1024x1024. We resize them to 512x512 with bilinear interpolation in our work.
  • We provide our resized 512x512 FFHQ on HuggingFace. Link this 512x512 version of the dataset to ./datasets/FFHQ/image512x512 (a linking sketch is shown below).
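
A minimal linking sketch; the source path is a placeholder for wherever you stored the downloaded data, and the same pattern applies to the pretrained models in the next section:

# Hypothetical source path; adjust to where you saved the resized 512x512 FFHQ images.
mkdir -p datasets/FFHQ
ln -s /path/to/FFHQ_512x512 datasets/FFHQ/image512x512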

Testing Dataset:

| Datasets | Short Description | Download | DAEFR results |
| --- | --- | --- | --- |
| CelebA-Test (HQ) | 3000 (HQ) ground truth images for evaluation | celeba_512_validation.zip | None |
| CelebA-Test (LQ) | 3000 (LQ) synthetic images for testing | self_celeba_512_v2.zip | Link |
| LFW-Test (LQ) | 1711 real-world images for testing | lfw_cropped_faces.zip | Link |
| WIDER-Test (LQ) | 970 real-world images for testing | Wider-Test.zip | Link |

Model: Pretrained models used for training and the trained model of our DAEFR can be obtained from HuggingFace. Link these models to ./experiments.

Make sure the models are stored as follows:

experiments/
|-- HQ_codebook.ckpt
|-- LQ_codebook.ckpt
|-- Association_stage.ckpt
|-- DAEFR_model.ckpt
|-- pretrained_models/
    |-- FFHQ_eye_mouth_landmarks_512.pth
    |-- arcface_resnet18.pth    
    |-- inception_FFHQ_512-f7b384ab.pth
    |-- lpips/
        |-- vgg.pth

Test

sh scripts/test.sh

Or you can use the following command for testing:

CUDA_VISIBLE_DEVICES=$GPU python -u scripts/test.py \
--outdir $outdir \
-r $checkpoint \
-c $config \
--test_path $align_test_path \
--aligned
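
As a concrete usage example, the variables might be filled in as follows. The checkpoint and config names follow the layout above; the output directory and test path are placeholders, not part of the repo:

# Hedged example invocation; adjust the paths to your own setup.
CUDA_VISIBLE_DEVICES=0 python -u scripts/test.py \
--outdir results/DAEFR_celeba_test \
-r experiments/DAEFR_model.ckpt \
-c config/DAEFR.yaml \
--test_path datasets/CelebA-Test/self_celeba_512_v2 \
--aligned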

Training

First stage for codebooks

sh scripts/run_HQ_codebook_training.sh
sh scripts/run_LQ_codebook_training.sh

Second stage for Association

sh scripts/run_association_stage_training.sh

Final stage for DAEFR

sh scripts/run_DAEFR_training.sh

Note.

  • Please modify the related paths to match your own setup.
  • The second stage is for model association. You need to add your trained HQ_Codebook and LQ_Codebook models to ckpt_path_HQ and ckpt_path_LQ in config/Association_stage.yaml (see the sketch after this list).
  • The final stage is for face restoration. You need to add your trained HQ_Codebook and Association models to ckpt_path_HQ and ckpt_path_LQ in config/DAEFR.yaml.
  • Our model is trained with 8 A100 40GB GPUs with a batch size of 4.
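
For the association stage, a minimal sketch of pointing the config at your codebook checkpoints, assuming the ckpt_path_HQ / ckpt_path_LQ keys each appear on a single line in the YAML file (checkpoint locations are placeholders):

# Hedged sketch: update the checkpoint paths in the association-stage config in place.
# Adjust the checkpoint locations to wherever your trained codebooks are saved.
sed -i "s|ckpt_path_HQ:.*|ckpt_path_HQ: experiments/HQ_codebook.ckpt|" config/Association_stage.yaml
sed -i "s|ckpt_path_LQ:.*|ckpt_path_LQ: experiments/LQ_codebook.ckpt|" config/Association_stage.yaml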
Metrics

sh scripts/metrics/run.sh

Note.

  • You need to add the path of the CelebA-Test dataset in the script if you want to compute IDA, PSNR, SSIM, and LPIPS. You also need to modify the names of the restored folders for evaluation.
  • For LMD and NIQE, we use the evaluation code from VQFR. Please refer to their repo for more details.

Citation

@article{tsai2023dual,
    title={Dual Associated Encoder for Face Restoration},
    author={Tsai, Yu-Ju and Liu, Yu-Lun and Qi, Lu and Chan, Kelvin CK and Yang, Ming-Hsuan},
    journal={arXiv preprint arXiv:2308.07314},
    year={2023}
}

Acknowledgement

We thank everyone who makes their code and models available, especially Taming Transformer, basicsr, RestoreFormer, CodeFormer, and VQFR.

Contact

For any questions, feel free to email louis19950117@gmail.com.
