
failure of TensorRT 8503 when running enqueueV2 on GPU 3080 (C++) #3874

Closed
monajalal opened this issue May 16, 2024 · 0 comments

Comments

monajalal commented May 16, 2024

For `context->enqueueV2(buffers, cuda_stream, nullptr);`

I get this error:

[E] [TRT] 3: [executionContext.cpp::enqueueInternal::629] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::629, condition: bindings[x] || nullBindingOK

I got the .pth model from the NVIDIA TAO Docker image and converted it to ONNX, also using NVIDIA TAO. Then I converted it to an engine file using my C++ code (which I cannot share). I had no problem with the YOLOX conversion.

Now when I use enqueueV2 I get this error. Could you please help in fixing/debugging this error?

My TensorRT version is v8503, and my system information is:

$ uname -a
Linux DOS 6.5.0-28-generic #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr  4 14:39:20 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
LSB Version:	core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.4 LTS
Release:	22.04
Codename:	jammy

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

$ nvidia-smi
Thu May 16 15:00:18 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    Off |   00000000:01:00.0  On |                  N/A |
| N/A   52C    P8             17W /   90W |      50MiB /  16384MiB |     11%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      6130      G   /usr/lib/xorg/Xorg                             45MiB |
+-----------------------------------------------------------------------------------------+

Description

Environment

TensorRT Version: 8503

NVIDIA GPU: GeForce RTX 3080

NVIDIA Driver Version: 550.67

CUDA Version: 11.5 (nvcc); driver reports 12.4

CUDNN Version:

Operating System: Ubuntu 22.04.4 LTS

Python Version (if applicable):

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version):

Relevant Files

Model link:

Steps To Reproduce

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
