batch-wise Predict works but is not faster #12720

Open
1 task done
mandal4 opened this issue May 16, 2024 · 1 comment
Labels
question Further information is requested

Comments

mandal4 commented May 16, 2024

Search before asking

Question

I have a question about the predict method used batch-wise.
I see that model.predict(img_path_list) works, but it isn't any faster when considering total time cost.
What I want is for 2) model.predict(multiple_img) to take roughly the same time as 1) model.predict(single_img).
Could you tell me if I missed something? Thanks.

Additional

No response

mandal4 added the question label May 16, 2024
@glenn-jocher (Member) commented

Hello! Thanks for your question regarding batch-wise prediction speed using the YOLOv8 model. It sounds like you want to see similar time performance whether predicting on a single image or multiple images simultaneously.

If you're seeing slower predictions in batch mode relative to single-image mode, this could be related to several factors, such as GPU under-utilization, I/O overhead from loading multiple images, or inefficient batching of work on the GPU.

A few areas to explore:

  • Ensure your GPU is fully utilized: check whether the GPU is saturated during batch predictions. Tools like nvidia-smi can help you monitor this.
  • Batch size adjustments: consider tuning the batch size to your hardware specifications; see the pre-batched tensor sketch after the example below.
  • Image preprocessing: make sure that image loading and preprocessing are not the bottleneck.

Here's a simple usage example that runs on the GPU when one is available. Note that the device is passed to predict(), not to the YOLO constructor:

import torch
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')

# Predict multiple images, on the GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
results = model.predict(['im1.jpg', 'im2.jpg'], device=device)
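
If you want to guarantee a single batched forward pass, predict() also accepts a pre-batched torch.Tensor source in BCHW layout with float values in the 0-1 range. A minimal sketch, with random data standing in for real preprocessed images:

import torch
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# A BCHW float tensor (values in 0-1) should be processed as one batched forward pass.
# torch.rand is dummy data standing in for real preprocessed images.
batch = torch.rand(4, 3, 640, 640)
results = model.predict(batch, device=device)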

If you continue to experience issues, profiling your inference times with detailed GPU metrics may offer more insight into where the delays occur. 🚀
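
As a starting point, here is a minimal timing sketch (the image file names are placeholders): it warms the model up first so one-time initialization doesn't skew the numbers, and synchronizes the GPU around the timed call because CUDA execution is asynchronous:

import time

import torch
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
device = 'cuda' if torch.cuda.is_available() else 'cpu'
paths = ['im1.jpg', 'im2.jpg']  # placeholder file names

# Warm-up pass so model loading and CUDA initialization are excluded from timing
model.predict(paths[0], device=device)

if torch.cuda.is_available():
    torch.cuda.synchronize()  # finish pending GPU work before starting the clock
start = time.perf_counter()
results = model.predict(paths, device=device)
if torch.cuda.is_available():
    torch.cuda.synchronize()  # make sure the GPU has finished before stopping the clock
print(f'{len(paths)} images in {time.perf_counter() - start:.3f}s')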
