Python 3.8.5 · torch · torchvision

Generative deep features

image

In recent years, more and more GANs have been trained to solve the problem of image generation, since they offer stunning visual quality. One of the bigger disadvantages of GANs is the need for large training datasets, which are not easily available for every purpose. SinGAN [1] was introduced as a model that combats this disadvantage by training on a single image alone, using a multi-scale GAN architecture.

In parallel, several papers published over the last couple of years have established the connection between the deep features of classification networks and the semantic content of images, such that the visual content of an image can be described by the statistics of its deep features.

The goal of this students' project is to research the capability of generating a completely new image with the same visual content as a single given natural image, using unsupervised learning of a deep neural network without a GAN. Instead, we use distribution loss functions over the deep features (the outputs of VGG19's feature maps) of the generated and original images. Using a pyramidal structure similar to SinGAN's, we succeeded in creating realistic and varied images in different sizes and aspect ratios that maintain both the global structure and the visual content of the source image, from a single image and without an adversarial loss. We also apply the method to several additional applications, taken from the image manipulation tasks of the original paper.
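To illustrate the idea (and only as a rough sketch, not this project's exact loss), the snippet below extracts VGG19 feature maps from a generated image and from the source image and compares simple per-channel statistics. The chosen layer indices and the matched statistics are assumptions for illustration; the project itself uses a distribution loss (e.g., PDL) over these deep features.

import torch
from torchvision import models

# Frozen VGG19 feature extractor (pretrained on ImageNet)
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def deep_features(x, layer_ids=(3, 8, 17, 26)):
    """Collect feature maps from a few VGG19 layers (indices are illustrative)."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats

def feature_stats_loss(generated, source):
    """Toy stand-in for a distribution loss: match per-channel mean and std
    of the deep features of the generated and source images."""
    loss = torch.tensor(0.0)
    for fg, fs in zip(deep_features(generated), deep_features(source)):
        fg, fs = fg.flatten(2), fs.flatten(2)  # N x C x (H*W)
        loss = loss + (fg.mean(2) - fs.mean(2)).abs().mean()
        loss = loss + (fg.std(2) - fs.std(2)).abs().mean()
    return loss

# x_gen would come from the generator at some pyramid scale;
# x_src is the (resized) original image, both normalized NCHW tensors.
x_gen, x_src = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
print(feature_stats_loss(x_gen, x_src))

In the actual project such a loss is applied at every scale of the image pyramid, coarse to fine, which is what lets the generated samples keep the source image's global structure.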

Table of Contents

  • Requirements
  • Usage Example
  • Applications
  • Team
  • Examples
  • Sources

Requirements

The code was tested with the following library versions:

  • imageio 2.9.0
  • matplotlib 3.3.4
  • numpy 1.19.5
  • pytorch 1.8.1
  • scikit-image 0.18.1
  • scikit-learn 0.24.1
  • scipy 1.6.1
  • torch 1.7.1+cu110
  • torchvision 0.8.2+cu110
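For reproducibility, these can be pinned in a requirements.txt; the sketch below is an assumption (note the list above mentions both pytorch 1.8.1 and torch 1.7.1+cu110, so pin whichever build matches your CUDA setup):

imageio==2.9.0
matplotlib==3.3.4
numpy==1.19.5
scikit-image==0.18.1
scikit-learn==0.24.1
scipy==1.6.1
torch==1.7.1+cu110
torchvision==0.8.2+cu110

The +cu110 wheels are installed from PyTorch's wheel index, e.g. pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html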

Usage Example

Training a model (on a single image)

python main.py --image_path <input_image_path>

Sample outputs are automatically generated at each scale and at the end of training. Use --help for more information on the parameters.

Applications

The code folder contains multiple scripts for generating the applications:

├── code
│   ├── edges.py
│   ├── animation.py
│   ├── scaled_sample.py
│   ├── harmonization.py
│   ├── paint_to_image.py

All scripts require two parameters: --image_path, the original image the model was trained on, and --train_net_dir, the path to the trained model's folder. The output is saved inside the trained model directory given to the script, under a suitable name (e.g., <train_net_dir>/Harmonization for the harmonization.py script). Each application has its own additional parameters; refer to the --help of each script. The relevant arguments always appear at the top of the help page as optional arguments.
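For example, a harmonization run might look like the following (the paths are placeholders, and any script-specific flags beyond the two common ones should be taken from the script's --help):

python harmonization.py --image_path <input_image_path> --train_net_dir <train_net_dir>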

Team

Hila Manor and Da-El Klang
Supervised by Tamar Rott-Shaham

Examples

All examples were generated using PDL as the distribution loss function.

  • birds: min_size=21, a=25, a_color=0
  • mountains: min_size=19, a=35, a_color=3
  • colosseum: min_size=19, a=10, a_color=1

  • trees: min_size=19, a=35, a_color=0
  • cows: min_size=19, a=10, a_color=1

  • starry night: injection at the 7th scale out of 11, min_size=21, a=25, a_color=3
  • tree: injection at the 9th scale out of 12, min_size=19, a=35, a_color=1

  • parameters: min_size=19, a=35, a_color=1

Animation from a single image example

Sources

[1] T. R. Shaham, T. Dekel, and T. Michaeli, "SinGAN: Learning a Generative Model from a Single Natural Image," ICCV 2019.
