
Whether to use an lr scheduler when training from scratch? #95

bad-meets-joke opened this issue Feb 29, 2024 · 1 comment

@bad-meets-joke

Hi,

If we choose to train this model from scratch on a custom dataset, should we warm up the learning rate at the beginning and then decrease it slowly?

I could not find an lr scheduler in the class Palette(BaseModel).

Hope to get your reply.

TMDTom commented Mar 1, 2024

Hello, you can solve your problem like this. In class Palette(BaseModel), where the optimizer is created:

    self.optG = torch.optim.Adam(
        list(filter(lambda p: p.requires_grad, self.netG.parameters())),
        **optimizers[0])
    self.optimizers.append(self.optG)

add a scheduler right after it:

    from torch.optim.lr_scheduler import LambdaLR

    # Keep the lr multiplier at 1.0 for the first 100 epochs, then decay it
    # linearly so it reaches 0.5 of the base lr at n_epoch.
    self.scheduler = LambdaLR(
        self.optG,
        lr_lambda=lambda epoch: 1.0 if epoch < 100
        else 1.0 - (0.5 / (self.opt['train']['n_epoch'] - 100)) * (epoch + 1 - 100))
    self.schedulers.append(self.scheduler)

Then in train_step write like this:

    for scheduler in self.schedulers:
        scheduler.step()
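
Below is a minimal, self-contained sketch (separate from the Palette code) of a LambdaLR schedule that also includes the linear warmup asked about in the question. The toy model, base lr, and the n_epoch / warmup_epochs values are hypothetical placeholders, not values from the repo:

    import torch
    from torch.optim.lr_scheduler import LambdaLR

    n_epoch = 500        # hypothetical total number of training epochs
    warmup_epochs = 10   # hypothetical warmup length

    model = torch.nn.Linear(8, 8)                          # stand-in for self.netG
    optG = torch.optim.Adam(model.parameters(), lr=5e-5)

    def lr_lambda(epoch):
        # Ramp the lr multiplier linearly up to 1.0 during warmup, then decay
        # it linearly so it ends at 0.5 of the base lr at the final epoch.
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        return 1.0 - 0.5 * (epoch + 1 - warmup_epochs) / (n_epoch - warmup_epochs)

    scheduler = LambdaLR(optG, lr_lambda=lr_lambda)

    for epoch in range(n_epoch):
        # one dummy training step so the optimizer steps before the scheduler
        loss = model(torch.randn(4, 8)).sum()
        optG.zero_grad()
        loss.backward()
        optG.step()
        scheduler.step()   # advance the schedule once per epoch

Note that LambdaLR simply counts calls to scheduler.step(), so if the schedulers are stepped once per iteration instead of once per epoch, the lambda's epoch argument becomes an iteration counter and the thresholds (100, n_epoch) would need to be scaled accordingly.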
