Since Jetson supports Triton Inference Server, I am considering adopting it, and I have a few questions.
In an environment where multiple AI models run on a Jetson, is there any advantage to using Triton Inference Server over running them individually with TensorRT? (Triton Inference Server's queuing optimizations vs. the gRPC communication latency added over localhost.)
When serving multiple models, Triton lets you serve them concurrently and configure each model separately for your use case. Triton also supports popular machine learning frameworks, so your models do not all have to be TensorRT engines. Another benefit: to serve a model with TensorRT directly, you need to write additional code to interact with its APIs, which Triton already does for you, so deploying a model through Triton should require less effort.
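To illustrate the per-model configuration mentioned above, here is a hypothetical `config.pbtxt` for one TensorRT model in a Triton model repository. The model name, tensor names, shapes, and batching settings are assumptions for the sketch, not taken from the original post:

```
name: "detector_trt"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "detections"
    data_type: TYPE_FP32
    dims: [ 100, 6 ]
  }
]
# Queue incoming requests briefly so Triton can batch them together
dynamic_batching {
  max_queue_delay_microseconds: 100
}
# Run two copies of the engine on the GPU
instance_group [ { count: 2, kind: KIND_GPU } ]
```

Each model in the repository gets its own file like this, which is how Triton lets you tune queuing and concurrency per model rather than writing that logic yourself against the TensorRT API.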
System shared memory is for sharing CPU memory between processes, while CUDA shared memory is for GPU memory. You usually want the data stored close to the device where the model runs, so if your model is deployed on a GPU, you would explore CUDA shared memory.
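Triton's system shared memory relies on the same operating-system mechanism as ordinary shared memory between processes. A minimal Python sketch of the underlying idea, using only the standard library (this is an illustration of CPU-side shared memory, not the Triton client API; names and sizes are made up):

```python
import struct
from multiprocessing import Process, shared_memory

def producer(shm_name: str, n: int) -> None:
    # Attach to an existing region by name from a second process and fill it
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:n * 4] = struct.pack(f"{n}f", *range(n))
    shm.close()

def main() -> list:
    n = 4
    # Create a CPU shared-memory region visible to other processes by name
    shm = shared_memory.SharedMemory(create=True, size=n * 4)
    try:
        p = Process(target=producer, args=(shm.name, n))
        p.start()
        p.join()
        # Read back the floats the other process wrote, with no copy in between
        values = list(struct.unpack(f"{n}f", bytes(shm.buf[:n * 4])))
    finally:
        shm.close()
        shm.unlink()
    return values

if __name__ == "__main__":
    print(main())
```

With Triton, the client creates a region like this, registers it with the server, and passes tensor data by name instead of serializing it over the localhost socket. CUDA shared memory plays the same role but the region lives in GPU memory and is exchanged via CUDA IPC handles.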
It appears that system shared memory and CUDA shared memory are available as ways to reduce localhost communication latency. What is the difference between the two? (The linked document describes both features: https://docs.nvidia.com/deeplearning/triton-inference-server/archives/triton_inference_server_1140/user-guide/docs/client_example.html)
System shared memory has been confirmed to work, but CUDA shared memory produces an error like the one in the link above.
Does Jetson currently support CUDA shared memory? (Failed to register CUDA shared mem: failed to open CUDA IPC handle: invalid resource handle #5798)