Inference with model after converting to TFLite file. #12988
Comments
@sangyo1 hey there! It looks like you're having trouble with indexing the outputs of your TFLite model. The error suggests you're reading from the wrong output tensor. The key here is to ensure that you're accessing the correct indices in the model's `output_details`. Here's a practical step to debug this issue:
After verifying the correct indices for each output tensor, update the indices in your inference code accordingly. This should help resolve the indexing error.
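A minimal sketch of that debugging step using TensorFlow's `tf.lite.Interpreter`. A tiny stand-in Keras model is converted in memory so the snippet is self-contained; with your exported file you would pass `model_path="best_fp16.tflite"` instead (that filename is an assumption):

```python
import tensorflow as tf

# Tiny stand-in model so the sketch runs on its own; in practice, load your
# exported file with tf.lite.Interpreter(model_path="best_fp16.tflite").
model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(4)])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

# Print each output tensor's index, shape, and dtype so you can see which
# index corresponds to which output in your inference code.
for detail in interpreter.get_output_details():
    print(detail["index"], detail["shape"], detail["dtype"])
```

With a YOLOv5 TFLite export you would typically see a single `[1, N, 5 + num_classes]` output, where each row is box coordinates, objectness, and per-class scores.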
Thank you @glenn-jocher
Also, here is my input:
I was going over #11395 (comment).
I have another question regarding the conversion to TFLite. I suspect the labels (classes) might have gotten mixed up. Originally, I trained the model with 6 classes, but when I run the command:
it incorrectly tags objects as motorcycles, airplanes, etc., which are unrelated to my training labels.
Hey @sangyo1! It sounds like there might be a mix-up with the class-label mapping when running inference with the converted TFLite model. Here's a quick check and a few tips:
Fixing these should align the predictions more accurately with your original training classes.
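As a quick sanity check, make sure your inference script maps predicted class indices through your own training names rather than a default COCO list — a minimal sketch, where the six names below are placeholders for the classes in your own data.yaml:

```python
# Placeholder names: substitute the six classes from your own data.yaml,
# in the same index order used during training.
custom_names = ["cat", "dog", "car", "truck", "person", "bicycle"]

def label_for(class_id, names):
    """Map a predicted class index to a human-readable label."""
    return names[int(class_id)]

print(label_for(2, custom_names))  # → car
```

Seeing COCO labels such as "motorcycle" or "airplane" usually means the inference script is pairing the model with the default 80-class name list instead of your custom one; the TFLite file itself does not enforce the label names.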
Thank you for your response, @glenn-jocher. When I run:
everything works perfectly. However, I encounter issues with the best_fp16.tflite file where it mislabels objects. I'm curious if the command:
requires a specific label map to convert to a TFLite model. Additionally, regarding the output_details mentioned earlier, how can I identify which outputs correspond to bounding boxes, labels, and confidence scores? How should I go about writing my own inference code to detect objects in images? Is it feasible to modify detect.py to craft my own script? Ultimately, I'd like to automatically save images with specific detected objects to a different directory. How can I implement this? Any tips for updating this function?
How do I get the right boxes, classes, and scores from a YOLOv5 TFLite model?
Regarding the question above: as you can see, my own inference code is way off. I referenced the code from #1981 (comment). Here is my code; I am not sure where I made a mistake.
Sorry to keep adding more questions, but here is the code where I changed my image pre-processing. This is the updated preprocess_image function:
Hey @sangyo1, no worries about the questions, happy to help! Looking at your `preprocess_image` function, here's a streamlined version:

```python
from PIL import Image
import numpy as np

def preprocess_image(image_path, input_size, input_mean=127.5, input_std=127.5):
    image = Image.open(image_path).convert('RGB')
    image = image.resize((input_size, input_size))
    image_data = np.array(image).astype(np.float32)
    image_data = (image_data - input_mean) / input_std
    image_data = np.expand_dims(image_data, axis=0)
    return image, image_data
```

This ensures consistency in image preparation for your model. Try running your detection with this and check if it resolves the issue with the bounding boxes! 😊
@glenn-jocher,
Is this the correct way to create the boxes?
Hey @sangyo1! It looks like your approach to drawing the bounding boxes and preprocessing is generally correct. However, issues could arise from how the box coordinates are being recalculated. Since the YOLO model outputs coordinates as [x_center, y_center, width, height] relative to the image's dimensions, converting these to corner coordinates (xmin, ymin, xmax, ymax) should follow this mapping:

```python
xmin = int(max(1, (x - w / 2) * W))
xmax = int(min(W, (x + w / 2) * W))
ymin = int(max(1, (y - h / 2) * H))
ymax = int(min(H, (y + h / 2) * H))
```

Ensure the image size matches the dimensions you're visualizing the outputs on, especially after resizing. A good check would be to confirm that the aspect ratio is maintained, or adjust accordingly, to see if the bounding boxes improve. For better troubleshooting, recheck your preprocessing and ensure that image aspect ratios are handled properly during resize operations. Correct preprocessing is often critical to ensuring model outputs align well with the input image. 😊
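To make the aspect-ratio point concrete, here is a minimal letterbox sketch — an illustrative stand-in, not the exact `letterbox` utility YOLOv5 ships — that resizes while preserving aspect ratio, pads to a square canvas, and returns the values needed to map boxes back to the original image:

```python
from PIL import Image

def letterbox(img, size=640, fill=114):
    """Resize keeping aspect ratio, then pad to a size x size canvas.

    Returns the padded image plus the scale and x/y offsets, which are
    needed later to map predicted boxes back to original coordinates.
    """
    w, h = img.size
    scale = size / max(w, h)
    nw, nh = int(round(w * scale)), int(round(h * scale))
    resized = img.resize((nw, nh))
    canvas = Image.new("RGB", (size, size), (fill, fill, fill))
    dx, dy = (size - nw) // 2, (size - nh) // 2
    canvas.paste(resized, (dx, dy))
    return canvas, scale, dx, dy
```

If you letterbox on the way in, remember to subtract `dx`/`dy` and divide by `scale` when drawing boxes on the original image.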
Hello @glenn-jocher
I updated my detect_object function based on your recommendation,
since the previous version raised an error, but I believe the code above does the same thing. However, I still get the same image with the same bounding boxes, and I am not sure what I missed, so I have uploaded the full code. Or, as I mentioned before, is it possible to use detect.py inside my inference Python code? For example, run detect.py, and if it detects a specific object, save the image to a save directory, something like that.
Hello @sangyo1! Thanks for sharing your updated code. It seems like you've adjusted the bounding box calculations correctly. If the bounding boxes still appear off, you might want to validate:
About integrating detect.py: for applying agnostic NMS with TFLite models, you would typically implement it as you would with a regular model output:
We don't have out-of-the-box support for importing detect.py into your own script this way, but you can adapt its post-processing logic in your own code. I hope this helps! 😊
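For the NMS step, a minimal pure-NumPy sketch of greedy non-maximum suppression on [xmin, ymin, xmax, ymax] boxes — class-agnostic because it ignores class IDs entirely (this is an illustrative implementation, not the one detect.py uses):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy class-agnostic NMS. boxes: (N, 4) [xmin, ymin, xmax, ymax].

    Returns indices of kept boxes, highest score first.
    """
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        # Drop candidates that overlap the kept box too much
        order = order[1:][iou <= iou_thres]
    return keep
```

Run it after filtering raw TFLite rows by confidence; the two heavily overlapping boxes below collapse to the higher-scoring one.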
@glenn-jocher
I have a question for you. I'm trying to save images that contain specific target classes, but how can I achieve this? Previously, my code saved all images that detected any object. Now, I want to save only those images where specific objects are detected, as I've added some classes to reduce false positives. How can I configure this in the code?
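One way to do this — a minimal sketch in which the class IDs, function name, and directory are all hypothetical, not part of the original code:

```python
import shutil
from pathlib import Path

TARGET_CLASSES = {0, 3}  # hypothetical: IDs of the classes you care about

def save_if_target(image_path, detected_class_ids, save_dir="hits"):
    """Copy the image to save_dir only if any detection is a target class."""
    if TARGET_CLASSES & set(detected_class_ids):
        Path(save_dir).mkdir(parents=True, exist_ok=True)
        shutil.copy(image_path, save_dir)
        return True
    return False
```

Call it once per image with the list of class IDs your detection step produced; images with no target-class hits are simply skipped.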
Sorry, I actually solved the problem with the code above, but I have a new question: how do I set the score threshold in that code? I want to detect anything above a confidence threshold of 0.5.
Hello! Great to hear that you solved your previous issue! To set the confidence threshold for detections to 0.5 with a YOLOv5 model loaded via torch.hub, set the model's `conf` attribute before running inference (the model call itself does not accept a threshold keyword):

```python
def detect_objects(model, image):
    # Keep only detections with confidence above 0.5
    model.conf = 0.5
    results = model(image, size=640)
    return results
```

This will ensure that your model only considers detections with a confidence score above 0.5. Happy coding! 😊
Hey @glenn-jocher, thank you so much, that really helped. I have one last question: how can I apply detection specifically to the center of the image? For instance, I want to focus detection on only the central 50% of the image. How can I implement this?
@sangyo1 hello! I'm glad to hear the previous advice was helpful! To focus detection on the central 50% of an image, you can crop the image before feeding it into the model. Here's an example of how you might crop the image in Python using OpenCV:

```python
import cv2

def crop_center(image):
    h, w = image.shape[:2]
    start_x = w // 4
    end_x = start_x + (w // 2)
    start_y = h // 4
    end_y = start_y + (h // 2)
    cropped_image = image[start_y:end_y, start_x:end_x]
    return cropped_image

# Usage
image = cv2.imread('path_to_your_image.jpg')
cropped_image = crop_center(image)
# Now pass 'cropped_image' to the detection model
```

This crops to the central 50% of the image (half the width and half the height), and you can then pass the cropped portion to your YOLOv5 model for detection. Happy coding! 😊
Search before asking
Question
After converting the YOLOv5 model to a TFLite model using export.py, I am attempting to use it for object detection. However, I need to understand how to draw the bounding boxes and what the input and output formats are for this TFLite model. I'm currently facing issues with incorrect bounding box placement, or errors, in my object detection code. Here's the code I'm using to load the image and perform object detection, but the outcomes are not correct. FYI, this is how I converted the model:
```shell
python3 export.py --weights /home/ubuntu/ssl/yolov5/runs/train/exp9/weights/best.pt --include tflite
```
And this is my tensor input and output
Additional
Here is the error I get