One cycle policy learning rate scheduler

A PyTorch implementation of the one cycle policy proposed in Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
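
For intuition, the one cycle policy ramps the learning rate from a lower bound up to a maximum over roughly the first half of training, then anneals it back down; the paper pairs this with an inverse momentum cycle. The standalone sketch below illustrates the learning rate trajectory only; the function one_cycle_lr and the purely linear ramps are illustrative assumptions, not this repository's internals.

# Illustrative sketch only: a linear one cycle learning rate trajectory.
# one_cycle_lr is a hypothetical helper, not part of this repository.
def one_cycle_lr(step, num_steps, lr_range=(0.1, 1.0)):
    min_lr, max_lr = lr_range
    half = num_steps // 2  # assumes num_steps >= 2
    if step <= half:
        # Warmup: ramp linearly from min_lr up to max_lr.
        return min_lr + (max_lr - min_lr) * step / half
    # Cooldown: anneal linearly from max_lr back down to min_lr.
    return max_lr - (max_lr - min_lr) * (step - half) / (num_steps - half)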

Usage

The implementation has an interface similar to other common learning rate schedulers.

import torch

from onecyclelr import OneCycleLR

# model, epochs, num_steps, and train_dataloader are placeholders for your
# own model, epoch count, total number of scheduler steps, and data loader.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = OneCycleLR(optimizer, num_steps=num_steps, lr_range=(0.1, 1.))
for epoch in range(epochs):
    for step, X in enumerate(train_dataloader):
        train(...)  # placeholder for a single training step
        scheduler.step()  # advance the schedule once per batch
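
Note that scheduler.step() is called after every batch rather than every epoch, so num_steps should cover the whole run; with a fixed-size dataloader that is typically epochs * len(train_dataloader).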
