
Replace thop with torch.profiler for FLOPs #12634

Draft · glenn-jocher wants to merge 16 commits into main
Conversation

@glenn-jocher (Member) commented on May 12, 2024

@Burhan-Q this is the replacement for thop, as discussed. If this works well we can eliminate the dependency entirely.

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Redefine how model FLOPs are calculated, enhancing compatibility and removing dependencies.

📊 Key Changes

  • Deprecated the thop-based get_flops function.
  • Introduced a new get_flops that uses the built-in torch.profiler for FLOPs calculation, removing the external dependency and integrating better with newer PyTorch versions (a rough sketch of this approach follows this list).
  • Adjusted the methodology to calculate FLOPs from either a stride-sized or actual image-sized input, keeping the function versatile across different model architectures.
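
For reference, a minimal sketch of counting FLOPs with torch.profiler is shown below. This is not the exact implementation in this PR; the stride-sized input, channel inference from the first parameter, and quadratic scaling to the target image size are assumptions based on the summary above.

```python
import torch
from torch.profiler import profile, ProfilerActivity

def get_flops_with_profiler(model, imgsz=640):
    """Sketch: estimate model GFLOPs with torch.profiler instead of thop (assumed logic)."""
    model = model.eval()
    p = next(model.parameters())
    # Profile a small stride-sized forward pass, then scale to the target image size
    # (mirrors the stride-vs-image-size approach described in the PR summary).
    stride = max(int(model.stride.max()), 32) if hasattr(model, "stride") else 32
    # p.shape[1] as the input channel count is a heuristic, not the PR's exact logic.
    im = torch.empty((1, p.shape[1], stride, stride), device=p.device)
    with profile(activities=[ProfilerActivity.CPU], with_flops=True) as prof, torch.no_grad():
        model(im)
    flops = sum(e.flops for e in prof.key_averages())  # total FLOPs for the stride-sized pass
    return flops / 1e9 * (imgsz / stride) ** 2  # scale quadratically to imgsz x imgsz, in GFLOPs
```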

🎯 Purpose & Impact

  • Purpose: Make the calculation of a model's floating point operations (FLOPs) more reliable and better integrated with the PyTorch ecosystem, improving compatibility with future PyTorch releases and reducing dependency issues.
  • Impact: Developers and users working with Ultralytics models will experience a smoother, more integrated method for FLOPs calculation. This can lead to more accurate performance assessments and optimizations without relying on third-party packages.
  • For Developers: Easier integration and maintenance with PyTorch's evolving features while reducing external dependency risks.
  • For Users: Transparent and consistent performance metrics aiding in model selection and optimization efforts.

@glenn-jocher changed the title from "Replace thop with torch.profiler" to "Replace thop with torch.profiler for FLOPs" on May 12, 2024

codecov bot commented May 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 66.60%. Comparing base (8fb1406) to head (c595873).

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #12634      +/-   ##
==========================================
- Coverage   70.26%   66.60%   -3.67%     
==========================================
  Files         124      124              
  Lines       15681    15660      -21     
==========================================
- Hits        11019    10431     -588     
- Misses       4662     5229     +567     
| Flag | Coverage Δ |
|---|---|
| Benchmarks | 35.40% <100.00%> (ø) |
| GPU | ? |
| Tests | 63.36% <100.00%> (-3.02%) ⬇️ |

Flags with carried forward coverage won't be shown.


@glenn-jocher (Member, Author) commented
Ok, after testing this seems to be mostly working (a few edge cases left), but it's MUCH SLOWER than thop. For the fastest FLOPs computations I see 0.02 s with thop vs 0.05 s with torch.profiler, and for the very slowest (RTDETR at 640) I see about 1 s with thop vs 11-13 s with torch.profiler.

This speed reduction is terrible, especially for RTDETR and possibly SAM, so while this was a great experiment, I think it may have to stay an experiment for now.
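
For anyone wanting to reproduce the comparison, a minimal timing sketch along these lines could work; this is not the PR's actual benchmark code, and the thop call and MACs-to-FLOPs conversion are assumptions.

```python
import time
from copy import deepcopy

import thop  # third-party package this PR aims to remove
import torch
from torch.profiler import profile, ProfilerActivity

def time_flops_methods(model, im):
    """Sketch: compare wall-clock cost of thop vs. torch.profiler FLOPs counting."""
    t0 = time.time()
    macs, _ = thop.profile(deepcopy(model), inputs=(im,), verbose=False)  # thop reports MACs
    t_thop = time.time() - t0

    t0 = time.time()
    with profile(activities=[ProfilerActivity.CPU], with_flops=True) as prof, torch.no_grad():
        model(im)
    flops = sum(e.flops for e in prof.key_averages())
    t_prof = time.time() - t0

    # 2 * MACs is the usual MACs-to-FLOPs conversion (an assumption here, not from the PR).
    print(f"thop: {2 * macs / 1e9:.1f} GFLOPs in {t_thop:.2f}s | "
          f"torch.profiler: {flops / 1e9:.1f} GFLOPs in {t_prof:.2f}s")
```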

@glenn-jocher self-assigned this on May 12, 2024
@glenn-jocher marked this pull request as draft on May 12, 2024, 21:06
@Burhan-Q added the "enhancement (New feature or request)" label on May 13, 2024
Labels
enhancement (New feature or request)
Projects
None yet
Development
Successfully merging this pull request may close these issues: None yet
2 participants