rt-detr with custom dataset training cannot convert to int8. #12698
Replies: 3 comments 2 replies
-
@harufumigithub it looks like the int8 quantization process failed because it couldn't find a CUDA device, as indicated by the error: `CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected`. This suggests that either the environment you're running the export command in has no access to a CUDA GPU, or the setup is not correctly configured to use it. To resolve this issue, please check the following:
If you are running this on a system without a GPU, you will need to switch to a machine with a CUDA-capable GPU to perform int8 quantization, as the calibration step depends heavily on the GPU for the required computation. Here's a quick check to see whether TensorFlow detects your GPU:

```python
import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
```

If it returns 0, TensorFlow cannot see a GPU.
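If importing TensorFlow just to probe for a GPU is inconvenient, a driver-level check works too. This is a small sketch (not from the thread) that only asks whether the `nvidia-smi` tool is present and runs cleanly, which is a reasonable proxy for a visible CUDA device:

```python
import shutil
import subprocess

def cuda_gpu_visible() -> bool:
    """Rough driver-level check: is nvidia-smi installed and runnable?

    This does not prove TensorFlow or PyTorch can use the GPU, only
    that the NVIDIA driver stack sees a device.
    """
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    try:
        return subprocess.run([exe], capture_output=True).returncode == 0
    except OSError:
        return False

print("CUDA-capable GPU visible:", cuda_gpu_visible())
```

If this prints `False` while you believe a GPU is installed, the driver or container runtime configuration is the first place to look.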
-
GPU seems available.

```
$ yolo export model=runs/detect/train/weights/best.pt format=tflite int8=true
PyTorch: starting from 'runs/detect/train/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 8) (72.9 MB)
TensorFlow SavedModel: starting export with tensorflow 2.13.1...
ONNX: starting export with onnx 1.16.0 opset 17...
Automatic generation of each OP name started ========================================
Model loaded ========================================================================
Model conversion started ============================================================
```
-
Thank you for the quick response. Here is my environment:

```
OS          Linux-6.5.0-28-generic-x86_64-with-glibc2.35
matplotlib  ✅ 3.8.4>=3.3.0
```
-
I have trained a custom dataset with the following:

```python
from ultralytics import RTDETR

model = RTDETR('yolov8n-rtdetr.yaml')
model.load('yolov8n.pt')
results = model.train(data="./tag_detection.yaml", device=0, imgsz=640, epochs=1000, batch=8)
```
Training completed successfully and inference works. I then tried to convert best.pt to tflite as follows:

```
yolo export model=best.pt format=tflite int8=True
```

However, the conversion succeeds up to generating the float16 tflite file but fails to generate the int8 tflite file:
```
Ultralytics YOLOv8.2.2 🚀 Python-3.9.13 torch-2.2.1 CPU (11th Gen Intel Core(TM) i7-11700B 3.20GHz)
YOLOv8n-rtdetr summary: 304 layers, 9483256 parameters, 0 gradients, 16.7 GFLOPs

PyTorch: starting from 'runs/detect/train/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 8) (72.9 MB)

TensorFlow SavedModel: starting export with tensorflow 2.13.1...

ONNX: starting export with onnx 1.15.0 opset 17...
ONNX: simplifying with onnxsim 0.4.36...
ONNX: export success ✅ 5.1s, saved as 'runs/detect/train/weights/best.onnx' (36.6 MB)

TensorFlow SavedModel: starting TFLite export with onnx2tf 1.17.5...
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model conversion started ============================================================
WARNING: The optimization process for shape estimation is skipped because it contains OPs that cannot be inferred by the standard onnxruntime.
WARNING: module 'onnx' has no attribute '_serialize'
2024-05-14 22:05:30.677030: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
saved_model output started ==========================================================
saved_model output complete!
Float32 tflite output complete!
Float16 tflite output complete!
Input signature information for quantization
signature_name: serving_default
input_name.0: images shape: (1, 640, 640, 3) dtype: <dtype: 'float32'>
./run.sh: line 12:  8661 Segmentation fault      (core dumped) yolo export model=runs/detect/train/weights/best.pt format=tflite int8=True
```
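Since int8 export performs post-training quantization that calibrates on sample images, one thing worth trying is driving the same export from the Ultralytics Python API and passing the dataset yaml explicitly via `data=` so calibration has representative images. This is a hedged sketch using the paths from this thread; whether it avoids the segfault on this machine is an assumption to verify:

```python
# Sketch: int8 TFLite export via the Ultralytics Python API.
# Assumes the ultralytics package is installed and the weight/yaml
# paths from this thread exist; both are placeholders here.
def export_int8(weights="runs/detect/train/weights/best.pt",
                data="./tag_detection.yaml"):
    from ultralytics import RTDETR  # imported lazily in this sketch

    model = RTDETR(weights)
    # int8=True triggers post-training quantization; data= supplies the
    # dataset yaml from which calibration images are sampled
    return model.export(format="tflite", int8=True, data=data)

if __name__ == "__main__":
    try:
        print(export_int8())
    except Exception as exc:  # package or weights may be missing
        print(f"export skipped: {exc}")
```

If the segfault persists even with explicit calibration data, the crash is likely inside the onnx2tf/TFLite converter itself, and upgrading `onnx2tf`, `onnx`, and `tensorflow` to mutually compatible versions would be the next step.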