Hi @aohorodnyk, could you please share the command that you run for the GRPC interface? A minimal reproducer would also be really helpful for us to investigate this issue.
@krishung5 In my code I use Golang, but for testing the issue is reproducible through Postman's GRPC integration.
The JSON I provided as a request is the actual request I send from Postman.
The Golang code is more complicated, but the result is the same, so I see no reason to write code for that when a simple GUI tool is available.
Could you please describe how I can help you reproduce the issue?
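For a reproducer, the exact request body matters. Since the original JSON did not survive here, below is a sketch of a minimal KServe v2 infer request of the kind Triton accepts over HTTP; the same fields map onto `ModelInferRequest` over GRPC. The model's tensor names, shape, and data are hypothetical and should be adjusted to the model's config:

```python
import json

# Hypothetical tensor names/shape, shown only to illustrate the
# structure of a KServe v2 infer request body.
request = {
    "inputs": [
        {
            "name": "input_1",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ],
    "outputs": [{"name": "output_1"}],
}
print(json.dumps(request, indent=2))
```

Posting this body to `/v2/models/<model>/versions/1/infer` over HTTP, and the equivalent `ModelInferRequest` over GRPC, should exercise both paths with identical inputs.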
Description
Hello,
I'm trying to set up Triton Server for my models. So far everything has worked well.
My model uses TF2; it loads and answers my requests as expected.
I use this Docker image to run models:
nvcr.io/nvidia/tritonserver:24.03-tf2-python-py3
But the problem is that a response with contents populated in the output is returned only when I use the HTTP interface.
When I use GRPC, contents is always null.
My request:
And the response over GRPC:
But the HTTP interface returns the expected response in data:
I've checked the /v2/models/my_model/versions/1/stats endpoint, where I can clearly see that every GRPC infer request increments the success counter for my model.
It looks like the issue is somewhere in the GRPC interface.
For the protocol I use these GRPC definitions: https://github.com/triton-inference-server/common/tree/main/protobuf
Could you please help me figure out how to fix this so that my model(s) return the expected response?
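One detail worth checking here: Triton's GRPC frontend typically returns tensor data in the `raw_output_contents` byte buffers of `ModelInferResponse` rather than in `outputs[].contents`, so `contents` can be null even for a successful inference. A minimal sketch of decoding such a raw buffer, assuming a little-endian FP32 output tensor (the sample bytes below are fabricated for illustration):

```python
import struct

def decode_fp32_raw_output(raw):
    """Decode a raw little-endian FP32 buffer, as carried in
    ModelInferResponse.raw_output_contents, into a list of floats."""
    count = len(raw) // 4  # 4 bytes per FP32 element
    return list(struct.unpack(f"<{count}f", raw))

# Fabricated 3-element FP32 buffer standing in for one
# raw_output_contents entry from a real response.
raw = struct.pack("<3f", 0.1, 0.7, 0.2)
print(decode_fp32_raw_output(raw))
```

The element count and datatype to decode with come from the matching entry in the response's `outputs` list (its `shape` and `datatype` fields).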
Triton Information
I use this Docker image:
nvcr.io/nvidia/tritonserver:24.03-tf2-python-py3
To Reproduce
Config:
Expected behavior
This part of the GRPC response:
"contents": null
will contain something like: