
Model Management #7228

Open
N-Kingsley opened this issue May 16, 2024 · 1 comment

Comments

@N-Kingsley

Can I specify a particular version to load or unload when using Triton Inference Server for model management?

I only found the following two endpoints:
Load model: v2/repository/models/{model-name}/load
Unload model: v2/repository/models/{model-name}/unload
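For reference, a minimal sketch of calling these repository endpoints from Python, using only the standard library. The server address and model name are placeholders, and it assumes a Triton server started in EXPLICIT model-control mode (otherwise the endpoints return an error):

```python
# Sketch: POST to Triton's model-repository load/unload endpoints.
# Assumes a Triton HTTP endpoint at TRITON_URL running with
# --model-control-mode=explicit; names below are placeholders.
import json
import urllib.request

TRITON_URL = "http://localhost:8000"  # assumed server address


def repository_action_url(base: str, model_name: str, action: str) -> str:
    """Build the v2 repository URL for 'load' or 'unload'."""
    return f"{base}/v2/repository/models/{model_name}/{action}"


def repository_action(model_name: str, action: str, base: str = TRITON_URL) -> int:
    """POST an empty JSON body to load or unload a model; returns HTTP status."""
    req = urllib.request.Request(
        repository_action_url(base, model_name, action),
        data=json.dumps({}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on success


if __name__ == "__main__":
    # Only builds the URL here; repository_action() needs a live server.
    print(repository_action_url(TRITON_URL, "densenet_onnx", "load"))
```

Note that the load/unload endpoints take a model name, not a version; which versions become available is governed by the model's configuration, as discussed below.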

@juanma9613

juanma9613 commented May 16, 2024

@N-Kingsley, I'm not a maintainer, but at least for the /load endpoint you can address this by listing the model versions you need in the version policy of your config.pbtxt: https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_configuration.html#version-policy. For the unload use case, I'm not sure how this should be handled.
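To make the version-policy suggestion concrete, here is an illustrative config.pbtxt fragment (model name and version numbers are placeholders) that makes Triton serve only versions 1 and 3 of a model when it is loaded:

```
name: "densenet_onnx"
platform: "onnxruntime_onnx"
version_policy: { specific: { versions: [1, 3] } }
```

The default policy is `latest` with one version; `all` serves every version found in the repository, and `specific` restricts loading to the listed versions.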
