
DCGAN can only generate noise images #252

Open
leeprinxin opened this issue Apr 6, 2021 · 5 comments

Comments

@leeprinxin

leeprinxin commented Apr 6, 2021

I went through the tutorial (https://livebook.manning.com/book/gans-in-action/chapter-4/103) and tried to construct the DCGAN model.

I run it in a Colab environment. keras / tensorflow.keras: 2.4.3 / 2.4.0; tensorflow: 2.4.1.

But after running it, the generator only produces noise images.

I tried replacing the optimizer with RMSprop and it also only produces noise images.
My code link: https://gist.github.com/leeprinxin/967ce5c24b163c68d13ec5305dea7207

@Ayan-Vishwakarma

The book did not explicitly state the learning rate. The typical learning rate for RMSprop is 0.0002 or 0.00005, as seen in most papers. This may be one of the problems, as DCGAN requires learning-rate tuning.
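This learning-rate sensitivity is easy to see even on a toy problem. The sketch below (plain Python, not DCGAN-specific) runs gradient descent on f(x) = x² with a small and an overly large step size; a too-large rate makes the iterates diverge instead of converge, which in a GAN shows up as unusable, noisy output:

```python
# Toy illustration of learning-rate sensitivity: gradient descent on
# f(x) = x^2, whose gradient is f'(x) = 2x. The update is
# x <- x - lr * 2x = x * (1 - 2 * lr), so the iterates shrink toward
# the minimum only when |1 - 2 * lr| < 1.
def descend(lr, steps=50, x=1.0):
    for _ in range(steps):
        x -= lr * 2 * x  # gradient step on f(x) = x^2
    return x

good = descend(lr=0.1)  # converges toward the minimum at 0
bad = descend(lr=1.1)   # overshoots every step and blows up
print(good, bad)
```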

@leeprinxin

> The book did not explicitly state the learning rate. The typical learning rate for RMSprop is 0.0002 or 0.00005, as seen in most papers. This may be one of the problems, as DCGAN requires learning-rate tuning.

I finally solved this problem.
I downgraded Keras to version 2.3.1 and it worked.
But I don't know why Keras 2.4.3 generates noise.

@obh5pq

obh5pq commented Apr 11, 2021

I encountered the same issue with InfoGAN

On the TensorFlow page for BatchNormalization it says that there was a behavioral change between TF 1.x and 2.x:

> setting trainable = False on the layer means that the layer will be subsequently run in inference mode [...] This behavior only occurs as of TensorFlow 2.0. In 1.*, setting layer.trainable = False would freeze the layer but would not switch it to inference mode.

Changing the import statement for BatchNormalization to

```python
from tensorflow.compat.v1.keras.layers import BatchNormalization
```

seems to fix the issue and produces the output you'd expect (at least, in InfoGAN's case).

Note: To get the InfoGAN example script to run on the current TF build, the import statements needed to be changed to

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import Activation, Embedding, ZeroPadding2D, Lambda
from tensorflow.compat.v1.keras.layers import BatchNormalization
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import tensorflow.keras.backend as K
```
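For context, the practical difference between the two BatchNormalization modes can be sketched without TensorFlow: in training mode the layer normalizes each batch with that batch's own statistics, while in inference mode it uses accumulated moving statistics, which in a frozen discriminator may be stale. A minimal illustration (plain Python, assuming a single feature for simplicity):

```python
import math

def bn_train(batch, eps=1e-5):
    # Training mode: normalize with the current batch's own mean/variance.
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

def bn_infer(batch, moving_mean, moving_var, eps=1e-5):
    # Inference mode: normalize with fixed, accumulated moving statistics.
    return [(x - moving_mean) / math.sqrt(moving_var + eps) for x in batch]

batch = [2.0, 4.0, 6.0]
train_out = bn_train(batch)  # zero mean, unit variance for this batch
# With moving stats (mean 0, var 1) that don't match the batch, the
# "normalized" output is essentially the raw input passed through.
infer_out = bn_infer(batch, moving_mean=0.0, moving_var=1.0)
print(train_out, infer_out)
```

This is why silently switching a frozen BN layer into inference mode (the TF 2.x behavior quoted above) can change GAN training so drastically.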

@LiuZhipeng99

I also tried this tutorial and found that it helps to add the momentum parameter here:

```python
model.add(BatchNormalization(momentum=0.8))
```
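For context, Keras's BatchNormalization uses momentum to update its moving statistics as moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean, so a lower momentum such as 0.8 tracks the batch statistics much faster than the default 0.99. A small sketch of the update rule (plain Python, not the Keras implementation itself):

```python
def track(batch_means, momentum):
    # Exponential moving average as used for BN moving statistics:
    # moving = momentum * moving + (1 - momentum) * batch_value
    moving = 0.0
    for m in batch_means:
        moving = momentum * moving + (1 - momentum) * m
    return moving

batch_means = [1.0] * 20  # 20 batches whose true mean is 1.0
slow = track(batch_means, momentum=0.99)  # Keras default: still far from 1.0
fast = track(batch_means, momentum=0.8)   # value suggested above: close to 1.0
print(slow, fast)
```

In the short training runs typical of GAN tutorials, faster-adapting moving statistics can make a visible difference at inference time.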

@thusinh1969

Using Spectral Normalization on top of the Conv2D layers of the discriminator will stabilize training greatly. Also, pay attention to kernel_initializer (glorot_normal, etc.).
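For anyone curious what spectral normalization actually does, here is a minimal NumPy sketch (an illustration of the technique, not the Keras/TF implementation): it estimates a weight matrix's largest singular value by power iteration and rescales the matrix so that value is about 1, which bounds how much the layer can amplify its input:

```python
import numpy as np

def spectral_normalize(w, n_iters=50):
    # Estimate the largest singular value of w by power iteration,
    # then divide w by it so its spectral norm is ~1.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ w @ v  # estimated largest singular value
    return w / sigma

w = np.random.default_rng(1).standard_normal((8, 4))
w_sn = spectral_normalize(w)
# After normalization, the largest singular value is close to 1.
print(np.linalg.svd(w_sn, compute_uv=False)[0])
```

For Conv2D kernels the same idea is applied to the kernel reshaped into a matrix; in practice a ready-made wrapper (e.g. TensorFlow Addons' SpectralNormalization) can be used instead of hand-rolling this.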
