Inference speed is None after transferring results to CPU #12723
👋 Hello @Rick-v-E, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package: pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
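The install step mentioned above is the standard pip one-liner (using a virtual environment is recommended but optional):

```shell
pip install ultralytics
```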
Hi there! It looks like the issue you're encountering is that the `speed` attribute is not carried over when the results are transferred to the CPU. To retain the speed information after moving your results to the CPU, you can simply copy the speed dictionary before calling `cpu()`:

```python
from ultralytics import YOLO

path_to_weights = ...
path_to_image = ...

model = YOLO(path_to_weights)
results = model.predict(path_to_image)[0]

# Store speed data before moving to CPU
speed_data = results.speed.copy()
results = results.cpu()

# Restore the speed data after the transfer
results.speed = speed_data
print(results.speed)
```

This modification ensures that the inference speed data is preserved and accessible after the results have been transferred to the CPU. Hope this helps! 😊
Hi, thanks for your answer! :) That's indeed what I am doing now and it works fine. However, I was a bit surprised that the `speed` attribute is lost in the first place. I initially understood that calling the `cpu()` method would only move the tensors to the CPU while keeping the other attributes intact.
Hello! I'm glad to hear that the solution worked for you! 😊 Indeed, the `cpu()` method currently returns a new results object without carrying over the `speed` attribute, so preserving it manually is necessary for now.

If you think this feature would be beneficial, you may consider opening a feature request on GitHub or, if you're inclined to contribute, submitting a pull request to include this functionality. Community contributions help make the tool more robust and user-friendly! Thank you for your feedback and suggestions! 🚀
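Until such a change lands, the copy-before-transfer workaround can be wrapped in a small helper. The sketch below is hypothetical (`cpu_with_speed` is not an ultralytics API); it only assumes the results object exposes `.cpu()` and `.speed`, and uses a minimal stand-in class so the pattern can be demonstrated without ultralytics installed:

```python
def cpu_with_speed(results):
    """Return a CPU copy of `results` with the `speed` dict preserved.

    Hypothetical helper: assumes `results` has ultralytics-style
    `.cpu()` and `.speed` attributes.
    """
    speed = dict(results.speed) if results.speed is not None else None
    moved = results.cpu()
    moved.speed = speed
    return moved


class FakeResults:
    """Minimal stand-in mimicking the reported behavior: `cpu()` returns
    a new object and the speed info is not carried over."""

    def __init__(self, speed):
        self.speed = speed

    def cpu(self):
        return FakeResults(speed=None)


r = FakeResults({"preprocess": 1.2, "inference": 8.5, "postprocess": 0.7})
moved = cpu_with_speed(r)
print(moved.speed)  # the speed dict survives the transfer
```

With real ultralytics results, `cpu_with_speed(model.predict(img)[0])` would replace the manual copy/restore steps shown above.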
Hello everyone, check out my new GitHub repository for running YOLOv8 object detection and segmentation inference using OpenVINO and NumPy only. This implementation is faster than the Torch version, offering improved performance and efficiency. Visit the repository here: Faster Inference YOLOv8. Your feedback and contributions are welcome! Thanks!
Hello! Great work on creating a repository that enhances YOLOv8 inference with OpenVINO and NumPy! 🚀 It's exciting to see community-driven efforts that push for performance improvements. I'll definitely check it out and encourage others to do the same. Keep up the fantastic work, and I look forward to seeing how your project evolves with community feedback and contributions. Thanks for sharing! 😊
Search before asking
YOLOv8 Component
Predict
Bug
I use the `speed` attribute to check the inference time of the network. However, I noticed that the speeds become `None` after transferring the detection results to the CPU.

Environment
Ultralytics YOLOv8.2.5 🚀 Python-3.10.13 torch-2.1.1+cu121 CUDA:0 (NVIDIA GeForce RTX 3090, 24257MiB)
Setup complete ✅ (32 CPUs, 62.7 GB RAM, 345.5/915.3 GB disk)
OS Linux-6.5.0-28-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.13
Install pip
RAM 62.71 GB
CPU AMD Ryzen 9 5950X 16-Core Processor
CUDA 12.1
matplotlib ✅ 3.8.2>=3.3.0
opencv-python ✅ 4.8.1.78>=4.6.0
pillow ✅ 10.1.0>=7.1.2
pyyaml ✅ 6.0>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.11.4>=1.4.1
torch ✅ 2.1.1>=1.8.0
torchvision ✅ 0.16.1>=0.9.0
tqdm ✅ 4.66.1>=4.64.0
psutil ✅ 5.9.1
py-cpuinfo ✅ 9.0.0
thop ✅ 0.1.1-2209072238>=0.1.1
pandas ✅ 1.4.2>=1.1.4
seaborn ✅ 0.13.0>=0.11.0
Minimal Reproducible Example
This results in the `speed` values being `None` after the results have been moved to the CPU.
Additional
No response
Are you willing to submit a PR?