modes/export/ #7933
Replies: 34 comments 120 replies
-
Where can we find working examples of a TF.js exported model?
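For reference, the export step itself is one call; a minimal sketch (the model file name is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # or your own best.pt
model.export(format="tfjs")  # writes a yolov8n_web_model/ directory
```

The exported directory can then be loaded in the browser with TF.js's tf.loadGraphModel(); zldrobit's tfjs-yolov5-example (linked later in this thread) shows the surrounding JavaScript plumbing, though the v8 output format differs (see the decoding discussion below).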
-
How do I use an exported .engine file for inference on the images in a directory?
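A minimal sketch, assuming the .engine file was built on the same machine and TensorRT version it will run on (paths are illustrative):

```python
from ultralytics import YOLO

model = YOLO("best.engine")         # load the exported TensorRT engine
results = model("path/to/images/")  # a directory is a valid source
for r in results:
    print(r.path, r.boxes.xyxy)     # per-image detections
```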
-
I trained a custom model starting from yolov8n.pt (backbone), and I want to register the model in MLflow in the .engine format. Is this possible directly, without the export step? Has anyone dealt with something similar? Thanks for your help!
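As far as I know the export step is still needed to produce the .engine file from the .pt checkpoint, but it can happen inside the same run; a sketch, assuming a configured MLflow tracking server (note MLflow's model registry expects a model flavor, so a raw .engine is usually tracked as an artifact instead):

```python
import mlflow
from ultralytics import YOLO

model = YOLO("best.pt")
engine_path = model.export(format="engine")  # export returns the output file path

with mlflow.start_run():
    mlflow.log_artifact(engine_path)         # stores best.engine with the run
```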
-
Hi, I appreciate the really awesome work within Ultralytics. I have a simple question. What is the difference between
-
Hello @pderrenger, can you please help me with how to use the PaddlePaddle format to extract text from images? Your response is very important to me; I am waiting for your reply.
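For what it's worth, Ultralytics can export a model to PaddlePaddle, but text recognition (OCR) is a separate task handled by projects such as PaddleOCR rather than by YOLO itself. A sketch of the export step:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # a detection model, not an OCR model
model.export(format="paddle")  # produces a *_paddle_model directory
```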
-
My code:

from ultralytics import YOLO
model = YOLO('yolov8n_web_model/yolov8n.pt')  # load an official model
model = YOLO('/path_to_model/best.pt')  # load a custom model

I got an error; the trace log (abridged) is below.

What you should do instead is wrap …
ERROR: input_onnx_file_path: /home/ubuntu/Python/runs/detect/train155/weights/best.onnx
TensorFlow SavedModel: export failure ❌ 7.4s: SavedModel file does not exist at: /home/ubuntu/Python/runs/detect/train155/weights/best_saved_model/{saved_model.pbtxt|saved_model.pb}

What is wrong, and what do I need to do to fix it? Thanks a lot.
-
Hello! The error I get is "TypeError: Model.export() takes 1 positional argument but 2 were given".
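A common cause of this TypeError is passing the format positionally; export() takes keyword arguments only. A minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="onnx")  # correct: format passed as a keyword argument
# model.export("onnx")       # raises: export() takes 1 positional argument but 2 were given
```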
-
Are there any examples of getting the output of a pose estimation model in C++ using a TorchScript file? I'm getting an output of shape (1, 56, 8400) for an input of size (1, 3, 640, 640) with two people in the sample picture. How should I interpret/post-process this output?
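For the standard COCO pose model, the 56 channels decode as 4 box values + 1 confidence + 17 keypoints × (x, y, visibility). A numpy sketch of the idea (the same indexing applies in C++), with a placeholder tensor standing in for the TorchScript output:

```python
import numpy as np

out = np.zeros((1, 56, 8400), dtype=np.float32)  # placeholder for the model output
preds = out[0].T                        # (8400, 56): one row per candidate
boxes = preds[:, :4]                    # cx, cy, w, h in input-image pixels
scores = preds[:, 4]                    # person confidence
kpts = preds[:, 5:].reshape(-1, 17, 3)  # x, y, visibility per keypoint
keep = scores > 0.25                    # threshold, then apply NMS on boxes[keep]
```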
-
I trained a yolov5 detection model a little while ago and have successfully converted that model to tensorflowjs. That tfjs model works as expected in code only slightly modified from the example available at https://github.com/zldrobit/tfjs-yolov5-example. My version of the relevant section:
I have now trained a yolov8 detection model on very similar data. The comments in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py#L45-L49 led me to expect the same output format. However, that does not seem to be the case. The v5 model output is the 4-length array of tensors (which is why the destructuring assignment works), but the v8 model output is a single tensor of shape [1, X, 8400], so the example code results in an error complaining that the model result is non-iterable when attempting to destructure. From what I understand, [1, X, 8400] is the expected output shape of the v8 model. Is further processing of the v8 model required, or did I do something wrong during the pt -> tfjs export?
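For context, the v8 head folds objectness into the per-class scores, so the single [1, X, 8400] tensor replaces v5's four outputs and needs decoding plus NMS in your own code. A numpy sketch of the decoding (the same steps translate to tfjs ops):

```python
import numpy as np

out = np.zeros((1, 84, 8400), dtype=np.float32)  # placeholder: 4 box + 80 class channels
preds = out[0].T           # (8400, 84)
boxes = preds[:, :4]       # cx, cy, w, h (no separate objectness score in v8)
class_conf = preds[:, 4:]  # per-class confidences
cls = class_conf.argmax(1)
conf = class_conf.max(1)
keep = conf > 0.25         # threshold, then run NMS on the surviving boxes
```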
-
I was wondering if anyone could help me with this code: I exported my custom-trained yolov8n.pt model to ONNX with model.export(format='onnx', int8=True, dynamic=True), but now my code is not working; I am having trouble using the outputs after running inference. My code (abridged):

def load_image(image_path): ...
def draw_bounding_boxes(image, detections, confidence_threshold=0.5): ...
def main(model_path, image_path): ...

if __name__ == "__main__":
    ...

Error:
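Without the full script and error message it's hard to say what failed, but a minimal ONNX Runtime sketch for a YOLOv8 detection model looks roughly like this (file names are illustrative; the raw output still needs decoding and NMS, as in the sketch earlier in this thread):

```python
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

img = cv2.imread("image.jpg")
x = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
x = cv2.resize(x, (640, 640)).astype(np.float32) / 255.0
x = np.ascontiguousarray(x.transpose(2, 0, 1)[None])  # HWC -> NCHW, add batch dim

(out,) = sess.run(None, {inp.name: x})  # (1, 4 + num_classes, 8400) for detection
```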
-
"batch_size" is not in arguments as previous versions? |
-
I converted the model I trained with custom data to TFLite format. Before converting, I set the int8 argument to true. But when I examined the TFLite file on the Netron website, I saw that the input is still float32. Is this normal, or is it a bug? Also, thank you very much for answering every question without getting bored.
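Depending on the converter settings, int8=True can quantize the weights and internal ops while keeping float32 input and output tensors, so what Netron shows may be expected rather than a bug. You can check programmatically; a sketch, where the file name is an assumption:

```python
import tensorflow as tf

interp = tf.lite.Interpreter(model_path="best_int8.tflite")
interp.allocate_tensors()
print(interp.get_input_details()[0]["dtype"])   # float32 here means float I/O, not a failed quantization
print(interp.get_output_details()[0]["dtype"])
```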
-
!yolo export model=/content/drive/MyDrive/best-1-1.pt format=tflite

export failure ❌ 33.0s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
-
Hi, I have tried all the TFLite export routes to convert best.pt to .tflite but none is working. I have also checked my runtime and all the latest imports (pip install -U ultralytics), and I have also tried the code you gave to someone in the comments, but the issue is not resolving.

Step 1: Export to TensorFlow SavedModel
!yolo export model='/content/drive/MyDrive/best-1-1.pt' format=saved_model

Step 2: Convert the exported SavedModel to TensorFlow Lite
import tensorflow as tf
...

Save the TFLite model
with open('/content/drive/MyDrive/yolov8_model.tflite', 'wb') as f:
...

but the same error comes back.
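For reference, the SavedModel-to-TFLite step usually looks like the sketch below; the directory name is an assumption based on Ultralytics' naming convention. If the StatusCode error persists in Colab, restarting the runtime so TensorFlow is imported only once often helps:

```python
import tensorflow as tf

# assumes the export created 'best-1-1_saved_model' next to the .pt file
converter = tf.lite.TFLiteConverter.from_saved_model("best-1-1_saved_model")
tflite_model = converter.convert()

with open("/content/drive/MyDrive/yolov8_model.tflite", "wb") as f:
    f.write(tflite_model)
```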
-
Can we export SAM / MobileSAM models to TensorRT or ONNX?
-
I have trained my model and have downloaded it. Now I want to deploy my model in a web interface for detecting objects in a live video feed. Can you provide me with the web interface code?
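A full web interface is beyond the scope of this thread, but a minimal local live-feed loop (which a Flask or Streamlit app could wrap) looks roughly like this:

```python
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")
cap = cv2.VideoCapture(0)  # webcam; replace with a stream URL if needed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    annotated = model(frame)[0].plot()  # run detection and draw the boxes
    cv2.imshow("YOLO live", annotated)
    if cv2.waitKey(1) & 0xFF == 27:     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```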
-
Hi, I have an issue with using a custom YOLOv8 model "best.pt". I was using YOLOv3 but I wanted to upgrade to YOLOv8. The following code is how I used the old model:

import cv2
import numpy as np
from tracker import Tracker
import cvzone
import os
from datetime import datetime
from pynput import mouse

def get_coords(x, y):
    print("Now at: {}".format((x, y)))

# Start the mouse listener
with mouse.Listener(on_move=get_coords) as listener:
    # Load Yolo
    net = cv2.dnn.readNet("yolov3_training_last.weights", "yolov3_testing.cfg")
    # Name custom object
    classes = ["car"]
    # Initialize the video capture
    cap = cv2.VideoCapture(r"C:\Users\aliye\OneDrive\Pictures\Saved Pictures\IMG_8189.MOV")  # Replace with your video file path
    # Create a background subtractor
    fgbg = cv2.createBackgroundSubtractorMOG2()
    area1 = [(360, 300), (360, 400), (600, 400), (600, 300)]
    area2 = [(360, 175), (360, 275), (600, 275), (600, 175)]
    tracker = Tracker()
    a1 = {}
    counter = []

    def save_full_frame(frame):
        # Create a folder and name saved frames with the current date and time
        current_datetime = datetime.now().strftime("%Y%m%d%H%M%S")
        folder_name = "wrongway"
        os.makedirs(folder_name, exist_ok=True)
        # Save the entire frame
        image_filename = os.path.join(folder_name, f"frame_{current_datetime}.jpg")
        cv2.imwrite(image_filename, frame)

    # Initialize variables for frame delay and pause state
    delay = 1  # Adjust the delay value (in milliseconds) to control the playback speed
    is_paused = False
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    colors = np.random.uniform(0, 255, size=(len(classes), 3))

    while True:
        if not is_paused:
            ret, frame = cap.read()
            if not ret:
                break
            frame = cv2.resize(frame, (1020, 500))
            height, width, channels = frame.shape
            # Apply background subtraction
            fgmask = fgbg.apply(frame)
            # Threshold the foreground mask
            _, thresh = cv2.threshold(fgmask, 250, 255, cv2.THRESH_BINARY)
            # Find contours in the thresholded image
            contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            # Collect bounding rectangles around moving objects
            list = []  # note: shadows the built-in 'list'
            for contour in contours:
                if cv2.contourArea(contour) > 1000:  # Adjust the area threshold as needed
                    x, y, w, h = cv2.boundingRect(contour)
                    list.append([x, y, w, h])
            bbox_idx = tracker.update(list)
            for bbox in bbox_idx:
                x1, y1, w1, h1, id = bbox
                cx = int(x1 + x1 + w1) // 2
                cy = int(y1 + y1 + h1) // 2
                result = cv2.pointPolygonTest(np.array(area1, np.int32), (cx, cy), False)
                if result >= 0:
                    a1[id] = (cx, cy)
                if id in a1:
                    result1 = cv2.pointPolygonTest(np.array(area2, np.int32), (cx, cy), False)
                    if result1 >= 0:
                        cv2.rectangle(frame, (x1, y1), (x1 + w1, y1 + h1), (0, 255, 0), 2)
                        cv2.circle(frame, (cx, cy), 6, (0, 255, 0), -1)
                        cvzone.putTextRect(frame, f'{id}', (x1, y1), 1, 1)
                        if counter.count(id) == 0:
                            counter.append(id)
                            save_full_frame(frame)
            # Object detection with YOLO
            blob = cv2.dnn.blobFromImage(frame, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
            net.setInput(blob)
            outs = net.forward(output_layers)
            # Showing information on the screen
            class_ids = []
            confidences = []
            boxes = []
            for out in outs:
                for detection in out:
                    scores = detection[5:]
                    class_id = np.argmax(scores)
                    confidence = scores[class_id]
                    if confidence > 0.3:
                        center_x = int(detection[0] * width)
                        center_y = int(detection[1] * height)
                        w = int(detection[2] * width)
                        h = int(detection[3] * height)
                        # Rectangle coordinates
                        x = int(center_x - w / 2)
                        y = int(center_y - h / 2)
                        boxes.append([x, y, w, h])
                        confidences.append(float(confidence))
                        class_ids.append(class_id)
            indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
            font = cv2.FONT_HERSHEY_PLAIN
            for i in range(len(boxes)):
                if i in indexes:
                    x, y, w, h = boxes[i]
                    label = str(classes[class_ids[i]])
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                    cv2.putText(frame, label, (x, y - 7), font, 1, (0, 255, 0), 1)
            cv2.polylines(frame, [np.array(area1, np.int32)], True, (0, 255, 0), 2)
            cv2.polylines(frame, [np.array(area2, np.int32)], True, (0, 0, 255), 2)
            p = len(counter)
            cvzone.putTextRect(frame, f'WrongsideVehicle:-{p}', (50, 60), 2, 2)
            cv2.imshow('Motion Detection', frame)
        # Poll the keyboard even while paused so the video can be resumed
        key = cv2.waitKey(delay) & 0xFF
        if key == 27:  # Press 'Esc' to exit
            break
        elif key == ord(' '):  # Press spacebar to pause/resume
            is_paused = not is_paused
    # Release the video capture and close all OpenCV windows
    cap.release()
    cv2.destroyAllWindows()

So, about the line that loads the network: I have a .cfg file for YOLOv3, but for YOLOv8 I only have my weights, and I tried many things but I can't figure out how to use the new model with that code. Sorry, I am new to this.
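YOLOv8 does not use a .cfg file; the architecture is stored inside the .pt checkpoint, and the ultralytics API replaces the whole cv2.dnn blob/forward/NMSBoxes section (preprocessing and NMS happen internally). A sketch of the replacement for the detection part of the loop above:

```python
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")  # no .cfg needed

# inside the while loop, instead of blobFromImage/forward/NMSBoxes:
results = model(frame, conf=0.3)[0]
for box in results.boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0])
    label = model.names[int(box.cls[0])]
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, label, (x1, y1 - 7), cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0), 1)
```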
-
Exporting your default 'yolov8s-cls.pt' (not just small; any size, to be exact) yields completely different results. Do TensorFlow models do different post-processing than the same model after export to, let's say, ONNX? And to OpenVINO? Given a bus image, I get 0.67 confidence that it's a trolleybus, while the exported model gives me 0.51 confidence. What am I doing wrong? Post-processing? Export variables? Or are the results bound to change and nothing can be done? Thanks.
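One way to narrow it down is to run the same image through the .pt model and the exported model via the same ultralytics predict pipeline, so preprocessing is identical; a sketch (image path is illustrative):

```python
from ultralytics import YOLO

pt_model = YOLO("yolov8s-cls.pt")
onnx_path = pt_model.export(format="onnx")

pt_conf = pt_model("bus.jpg")[0].probs.top1conf
ox_conf = YOLO(onnx_path)("bus.jpg")[0].probs.top1conf
print(pt_conf, ox_conf)  # small drift is normal; a large gap usually means preprocessing differs
```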
-
Is there a way to export the model in TensorRT format from a .pt file in varying precisions: INT8, FP16, FP32, and full-precision FP64?
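Roughly, yes for FP32/FP16/INT8; FP64 is not a TensorRT engine precision, so a full-FP64 export is not applicable. A sketch, where the INT8 calibration dataset is an assumption:

```python
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="engine")                                # FP32 (default)
model.export(format="engine", half=True)                     # FP16
model.export(format="engine", int8=True, data="coco8.yaml")  # INT8, needs calibration data
```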
-
Are these warnings normal during the export of the yolov8n model (TFLite, INT8):
-
I've trained my model and it's detecting pretty well in Python, so now I want to use embedded C with ML libraries (or an alternative) to embed my code into an STM32. I'm using a RealSense 3D camera, and this is my Python code (abridged):

import cv2
import pyrealsense2 as rs
from ultralytics import YOLO

# Load YOLOv8 model
model = YOLO("best.pt")

# Configure RealSense pipeline
pipeline = rs.pipeline()
...

# Start streaming
pipeline.start(config)

try:
    ...
finally:
    ...

How do I convert it to TensorFlow / TensorFlow Lite?
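The direct route is Ultralytics' own TFLite export; for an STM32 target you would then typically run the INT8 model under TensorFlow Lite Micro. A sketch:

```python
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="tflite")             # float32 .tflite
model.export(format="tflite", int8=True)  # full-integer model, the usual choice for MCUs
```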
-
Why wasn't NMS added to the TFLite export? I believe this is a very common requirement.
-
I need to run inference with best.pt on a Jetson Nano board. After I exported it to .engine format on my PC and transferred it to the Jetson Nano, I got an error saying it was exported with a different version than 8.0.1.6. How can I export it for TensorRT version 8.0.1.6? Note that the TensorRT version on my PC is 10.0.0b6.
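TensorRT engines are tied to the TensorRT version (and GPU) they were built with, so the usual fix is to run the export on the Jetson Nano itself rather than on the PC; a sketch:

```python
# run this on the Jetson Nano, where TensorRT 8.0.1.6 is installed:
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="engine")  # builds against the locally installed TensorRT
```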
-
Hi,
-
Greetings, members. I have tried converting my trained PyTorch model to a TensorFlow Lite model, but I am getting many errors, some of them relating to the parameters used.
-
Okay then. Thank you very much.
Let me first try this.
Kind regards
Hisham Imran Ssengendo.
…On Tue, 14 May 2024 at 00:21, Glenn Jocher wrote:

Hello! It looks like you're having some issues exporting your custom-trained PyTorch model to TensorFlow Lite for object detection. Here are simplified steps you can follow to perform this conversion:

1. Export your PyTorch model to ONNX format. Start by exporting your PyTorch model to ONNX, which acts as an intermediary format:

from ultralytics import YOLO

# Load your custom-trained model
model = YOLO('path/to/your/custom_model.pt')

# Export to ONNX
model.export(format='onnx', imgsz=(640, 640))  # adjust 'imgsz' as needed

2. Convert the ONNX model to TensorFlow Lite. Next, use tools like onnx-tensorflow (https://github.com/onnx/onnx-tensorflow) to convert the ONNX model to TensorFlow, and then the TensorFlow Lite Converter (https://www.tensorflow.org/lite/convert) to convert the TensorFlow model to TFLite. Unfortunately, direct conversion tools from ONNX to TFLite might not handle all operations supported in YOLO models, so an intermediate conversion to TensorFlow is often required.

For details on export arguments and further optimizations like quantization, which might help in reducing the model size while speeding up inference, especially on edge devices, please refer to our Export documentation: https://docs.ultralytics.com/modes/export/

Remember, getting the conversion right might require tweaking certain parameters based on the specific layers and operations used in your model. If you still face any issues, could you please share the specific errors you're encountering? That would help in providing more targeted assistance! 😊👍
-
Hello, I am a rookie. I tried to replace the YOLOv8 backbone with the ConvNeXtV2 network, and then tried to convert the modified .pt model file into a TFLite-format model file through export.py. However, only ONNX could be generated, and an error occurred when generating TFLite.
-
In ONNX export, if I set simplify=True, how does it work? Does it reduce any metrics of the model, such as mAP, precision, or recall?
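For context, simplify runs a graph simplifier (onnxsim) that folds constants and removes redundant ops; it is meant to be numerically equivalent, so metrics like mAP, precision, and recall should not change. A sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx", simplify=True)  # simplifies the graph, not the weights
```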
-
When I tried to run yolov8-seg.onnx with the batching option activated, this error appeared.

To reproduce:
Script:
Urgency: It's somewhat urgent
ONNX Runtime Installation: Built from Source
ONNX Runtime Version or Commit ID: 1.16.3
Execution Provider: 'webgpu' (WebGPU)
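If the ONNX file was exported with a fixed batch of 1, batched inputs will fail at session run time; exporting with dynamic axes is the usual remedy. A sketch (hedged; the WebGPU execution provider may have its own constraints):

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
model.export(format="onnx", dynamic=True)  # dynamic batch/height/width axes
```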
-
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.

from ultralytics import YOLO

# Load the model
model = YOLO('/media/arl/26f8381e-6627-4df4-91cf-e794bc76e569/hwk/train(yolov8n-seg).pt')  # load a custom-trained model

# Export the model
model.export(format='engine')

(yolov8) arl@arl-ARL:/media/arl/26f8381e-6627-4df4-91cf-e794bc76e569/hwk$ python3.9 engine_model.py
torch.cuda.is_available(): False

But I had a problem with CUDA not working and didn't find a solution. Is there a solution?
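TensorRT export needs a CUDA-enabled PyTorch build, and torch.cuda.is_available(): False usually means a CPU-only torch is installed. A quick check, with the reinstall command as an assumption for CUDA 11.8:

```python
import torch

print(torch.version.cuda)         # None on CPU-only builds
print(torch.cuda.is_available())  # must be True before exporting to 'engine'
# if False, reinstall a CUDA build, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/cu118
```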
-
modes/export/
Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML and more for deployment. Explore now!
https://docs.ultralytics.com/modes/export/