
number of Bbox support limit in yolov8! #12711

Open
2 tasks done
MehrnazFani opened this issue May 15, 2024 · 5 comments
Labels
bug (Something isn't working) · non-reproducible (Bug is not reproducible)

Comments

@MehrnazFani

Search before asking

  • I have searched the YOLOv8 issues and found no similar bug report.

YOLOv8 Component

No response

Bug

When training yolov8s_obb, I get a CUDA out-of-memory error even though my batch_size=1. It happens because I have training images with more than 100 overlapping bboxes in them. If I remove those images and their corresponding bboxes from my dataset, the issue is resolved. Why is this happening?
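
For reference, a minimal sketch of one way to locate such images by counting boxes per YOLO-format label file; the label directory and the 100-box threshold below are illustrative assumptions, not part of the original report:

from pathlib import Path

# Count boxes (one per non-empty line) in each YOLO-format label file and
# flag images whose label files exceed a chosen threshold.
LABEL_DIR = Path("datasets/obb/labels/train")  # illustrative path, adjust to your dataset
THRESHOLD = 100  # matches the ~100-box observation above

for label_file in sorted(LABEL_DIR.glob("*.txt")):
    n_boxes = sum(1 for line in label_file.read_text().splitlines() if line.strip())
    if n_boxes > THRESHOLD:
        print(f"{label_file.name}: {n_boxes} boxes")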

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
MehrnazFani added the bug label on May 15, 2024

👋 Hello @MehrnazFani, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of Ultralytics' up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status: If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@MehrnazFani hi there! It looks like you're encountering a CUDA out of memory error due to handling many overlapping bounding boxes in your images. Here’s a quick rundown of why this might be happening:

  1. High GPU Memory Usage: Each bounding box computation requires a certain amount of GPU memory. Having a lot of overlapping bounding boxes can increase the memory requirement significantly, even if your batch size is set to 1 (see the quick memory check sketched after this list).

  2. Complexity of Bboxes: Overlapping bounding boxes can lead to complex loss calculations and more memory being allocated to manage the overlaps during training.
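
A minimal sketch of such a memory check, assuming PyTorch and a CUDA-capable GPU are available; it simply prints how much GPU memory the current process has allocated and reserved, which is one way to confirm that images with many boxes spike memory even at batch=1:

import torch

# Report current GPU memory usage for the default CUDA device.
if torch.cuda.is_available():
    allocated_gb = torch.cuda.memory_allocated() / 1e9
    reserved_gb = torch.cuda.memory_reserved() / 1e9
    print(f"allocated: {allocated_gb:.2f} GB, reserved: {reserved_gb:.2f} GB")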

To resolve this:

  • Reduce the complexity or number of bounding boxes if possible, as you've already noticed improvements by doing so.
  • Increase your GPU memory, if upgrading hardware is an option, to accommodate larger datasets with more complex annotations.
  • Optimize memory usage: Try using half precision (float16) during training to reduce memory consumption, which can be done by setting half=True in your training command if the YOLOv8 implementation supports it.

Here's how you might use half precision:

from ultralytics import YOLO

model = YOLO('yolov8s-obb.pt')  # OBB weights, matching the yolov8s_obb model mentioned in the report
model.train(data='data.yaml', imgsz=640, batch=1, epochs=100, half=True)

Good luck with your further training, and thanks for offering to help with a PR! 😊

@MehrnazFani
Author

MehrnazFani commented May 16, 2024 via email

@MehrnazFani
Author

I have access to more powerful hardware. That solved my problem for now. half=True did not help!

@glenn-jocher
Member

@MehrnazFani hi Mehrnaz,

Great to hear that upgrading your hardware resolved the issue! It's useful to know that half=True didn't make a difference in your case. This feedback helps us understand the performance under different setups. If you encounter any more questions or need further assistance, feel free to reach out.

glenn-jocher added the non-reproducible label on May 24, 2024