Can I specify a particular version to load or unload when using triton-inference-server for model management?
I only found the following two APIs:
Load model: v2/repository/models/{model-name}/load
Unload model: v2/repository/models/{model-name}/unload
@N-Kingsley, I'm not a maintainer, but for the /load endpoint at least, you can address this by listing the model versions you need in the version policy of your config.pbtxt: https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_configuration.html#version-policy. I'm not sure how the unload case should be handled.
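For illustration, here is a minimal sketch of what that version policy might look like in config.pbtxt. The model name and version numbers are placeholders; the `version_policy` field and its `specific` form are documented in the model-configuration guide linked above. With this policy, a subsequent call to v2/repository/models/{model-name}/load should make only the listed versions available.

```
# Hypothetical config.pbtxt fragment -- model name and versions are examples.
name: "my_model"
platform: "tensorrt_plan"

# Only serve versions 1 and 3 of this model; other versions in the
# repository directory are ignored when the model is (re)loaded.
version_policy: { specific: { versions: [1, 3] } }
```

Alternatives to `specific` are `latest` (e.g. `version_policy: { latest: { num_versions: 2 } }`) and `all`, if you want Triton to pick versions automatically rather than pinning them.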