StyleGAN.pytorch

A PyTorch implementation of StyleGAN with full features.

[:star: New :star:] This repository has been archived; please head over to the official PyTorch implementation.


[ChineseGirl Dataset]

This repository contains the unofficial PyTorch implementation of the following paper:

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)
http://stylegan.xyz/paper

Abstract: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.

Features

  • Progressive Growing Training
  • Exponential Moving Average
  • Equalized Learning Rate
  • PixelNorm Layer (see the sketch after this list)
  • Minibatch Standard Deviation Layer
  • Style Mixing Regularization
  • Truncation Trick
  • Loading official TensorFlow pretrained weights
  • Gradient Clipping
  • Multi-GPU Training
  • FP16 Support
  • Conditional GAN
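
As a quick illustration of one of these components, here is a minimal sketch of a PixelNorm layer, which normalizes each pixel's feature vector to unit length as described in the ProGAN/StyleGAN papers. The class name and epsilon value are illustrative and may not match this repository's exact implementation.

import torch
import torch.nn as nn

class PixelNorm(nn.Module):
    """Normalize each pixel's feature vector across channels (ProGAN/StyleGAN)."""
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (N, C, H, W); divide by the RMS taken over the channel dimension
        return x * torch.rsqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps)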

How to use

Requirements

  • yacs
  • tqdm
  • numpy
  • torch
  • torchvision
  • tensorflow (optional, only needed for ./convert.py)
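
Assuming the packages above come from PyPI, the Python dependencies can be installed with the following (TensorFlow only if you plan to run ./convert.py):

pip install yacs tqdm numpy torch torchvision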

Running the training script:

Train from scratch:

python train.py --config configs/sample.yaml

Using a trained model:

Resume training from a checkpoint (starting from 128x128):

python train.py --config configs/sample.yaml --start_depth 5 --generator_file [] [--gen_shadow_file] --discriminator_file [] --gen_optim_file [] --dis_optim_file []
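
As a rough guide, --start_depth appears to index resolutions in the progressive-growing schedule starting from 4x4, so depth 5 corresponds to 128x128 (4 * 2^5). A tiny illustrative helper, assuming that schedule:

# Illustrative only: resolution produced at a given depth,
# assuming the schedule 4x4 -> 8x8 -> ... (resolution = 4 * 2**depth).
def resolution_for_depth(depth):
    return 4 * 2 ** depth

assert resolution_for_depth(5) == 128   # --start_depth 5 above
assert resolution_for_depth(8) == 1024  # full FFHQ resolution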

Style Mixing

python generate_mixing_figure.py --config configs/sample.yaml --generator_file []
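
For intuition, style mixing runs two latent codes through the mapping network and switches which one supplies the styles after a crossover layer. The sketch below shows the idea only; the mapping/synthesis interfaces are hypothetical placeholders, not this repository's exact API.

import torch

def mix_styles(mapping, synthesis, z1, z2, crossover, num_layers=18):
    # Hypothetical sketch: styles from z1 drive layers < crossover, z2 the rest.
    w1 = mapping(z1)  # (N, w_dim)
    w2 = mapping(z2)
    # One w per synthesis layer, spliced at the crossover point.
    w_mixed = torch.stack(
        [w1 if i < crossover else w2 for i in range(num_layers)], dim=1
    )
    return synthesis(w_mixed)  # assumes synthesis accepts per-layer w's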


Thanks to the dataset provider: Copyright (c) 2018, seeprettyface.com; BUPT_GWY contributed the dataset.

Truncation trick

python generate_truncation_figure.py --config configs/sample_cari2_128_truncation.yaml --generator_file cari2_128_truncation_gen.pth
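
The truncation trick trades variety for fidelity by pulling each latent toward the average latent vector. A sketch of the standard formulation (w_avg and psi are illustrative names; the script above uses its own configuration for these):

import torch

def truncate(w, w_avg, psi=0.7):
    # Standard truncation trick: interpolate w toward the running average w_avg.
    return w_avg + psi * (w - w_avg)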


Convert from official format

python convert.py --config configs/sample_ffhq_1024.yaml --input_file PATH/karras2019stylegan-ffhq-1024x1024.pkl --output_file ffhq_1024_gen.pth
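
The converted file is a regular PyTorch checkpoint, so it can at least be inspected with plain torch calls. This is only a sanity check, assuming a state_dict-like mapping of names to tensors; the exact layout and how it maps onto this repository's generator may differ.

import torch

state = torch.load("ffhq_1024_gen.pth", map_location="cpu")
if isinstance(state, dict):
    for name, value in list(state.items())[:10]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)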

Generated samples


[FFHQ Dataset](128x128)

Using weights transferred from the official TensorFlow repo.


[FFHQ Dataset](1024x1024)


[WebCaricatureDataset](128x128)

Reference

Thanks

Please feel free to open PRs / issues / suggestions here.

Due Credit

This code heavily uses NVIDIA's original StyleGAN code. We accredit and acknowledge their work here. The Original License is located in the base directory (file named LICENSE_ORIGINAL.txt).