
Update trainer.py #10160

Open
wants to merge 2 commits into main
Conversation

@JacekMaksymiuk commented Apr 18, 2024

As it stands, the scheduler is not a function ranging from 1 to lrf; it is a function ranging from 1 to lrf plus a small epsilon, because for the last epoch x / self.epochs < 1 (e.g. with 300 epochs, the last epoch gives 299/300).
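
For illustration, here is a minimal sketch of the off-by-one, assuming a linear schedule of the form `(1 - x / epochs) * (1 - lrf) + lrf`; the exact lambda in `trainer.py` may differ slightly:

```python
# Minimal sketch of the issue described above; `epochs` and `lrf` mirror the
# PR description, and the lambda is assumed to match trainer.py's linear form.
epochs, lrf = 300, 0.01

lf = lambda x: (1 - x / epochs) * (1.0 - lrf) + lrf  # current behaviour

# The scheduler steps once per epoch with x = 0 .. epochs - 1, so the
# final multiplier never reaches lrf exactly:
print(lf(epochs - 1))  # ~0.0133 = lrf + epsilon, not 0.01
print(lf(epochs))      # 0.01 exactly, but x never takes this value
```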

I have read the CLA Document and I sign the CLA

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhanced Learning Rate Strategies in Model Training 🚀

📊 Key Changes

  • Adjusted the cosine learning rate function to end one epoch earlier.
  • Modified the linear learning rate schedule to likewise conclude one epoch before the total training duration (see the sketch after this list).
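
A hedged sketch of both adjusted schedules follows; the `(epochs - 1)` denominator reflects the change described above, and the variable names are illustrative rather than the exact diff:

```python
import math

# Hedged sketch of the adjusted schedules: with an (epochs - 1) denominator,
# the last scheduler step (x = epochs - 1) lands exactly on lrf.
epochs, lrf = 300, 0.01

# Linear: decays 1 -> lrf over x = 0 .. epochs - 1
lf_linear = lambda x: max(1 - x / (epochs - 1), 0) * (1.0 - lrf) + lrf

# Cosine (one_cycle-style): eases 1 -> lrf over the same range
lf_cosine = lambda x: ((1 - math.cos(x * math.pi / (epochs - 1))) / 2) * (lrf - 1) + 1

assert abs(lf_linear(epochs - 1) - lrf) < 1e-9
assert abs(lf_cosine(epochs - 1) - lrf) < 1e-9
```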

🎯 Purpose & Impact

  • Purpose: These changes make the schedule end exactly at its target: the learning rate factor now reaches lrf on the final training epoch instead of stopping at lrf plus a small epsilon.
  • Impact: Users may notice improved model performance due to better learning rate scheduling, which can mean better accuracy or faster convergence, especially in the final phase of training. 📈


github-actions bot commented Apr 18, 2024

CLA Assistant Lite bot:
Thank you for your submission; we really appreciate it. Like many open-source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the CLA Document and I sign the CLA


1 out of 2 committers have signed the CLA.
✅ [glenn-jocher](https://github.com/glenn-jocher)
@JacekMaksymiuk
You can retrigger this bot by commenting `recheck` in this Pull Request


codecov bot commented Apr 18, 2024

Codecov Report

Attention: Patch coverage is 0%, with 2 lines in your changes missing coverage. Please review.

Project coverage is 36.03%. Comparing base (0ad0139) to head (ee5660d).

| Files | Patch % | Lines |
|---|---|---|
| ultralytics/engine/trainer.py | 0.00% | 2 Missing ⚠️ |
Additional details and impacted files
@@             Coverage Diff             @@
##             main   #10160       +/-   ##
===========================================
- Coverage   75.99%   36.03%   -39.97%     
===========================================
  Files         121      121               
  Lines       15351    15351               
===========================================
- Hits        11666     5531     -6135     
- Misses       3685     9820     +6135     
| Flag | Coverage Δ |
|---|---|
| Benchmarks | 36.03% <0.00%> (ø) |
| GPU | ? |
| Tests | ? |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@glenn-jocher
Member

@JacekMaksymiuk please submit a new comment on this PR to sign the CLA with this line:

I have read the CLA Document and I sign the CLA

@JacekMaksymiuk
Author

@glenn-jocher I have added this information to the comment.
I have read the CLA Document and I sign the CLA

@glenn-jocher
Member

recheck
