NVIDIA Triton Server
- Deploy an image classification model on NVIDIA Triton with GPUs
For this example, choose cifar10 as the name and select the KFServing protocol option.
To have a model to run, we have created several image classification models trained on the CIFAR10 dataset:
- TensorFlow ResNet32 model:
- ONNX model:
- PyTorch Torchscript model:
Choose one of these and select Triton as the server. Set the model name to match the name under which the model is saved in the bucket, so that Triton can load it.
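The steps above roughly correspond to a KServe-style InferenceService using the v2 (KFServing) protocol. A minimal sketch is below; the storage URI is a placeholder, and the exact resource the wizard creates may differ in your installation:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: cifar10
spec:
  predictor:
    model:
      modelFormat:
        name: triton
      protocolVersion: v2
      # Placeholder: point at the bucket path containing your
      # Triton model repository for the chosen model.
      storageUri: gs://<your-bucket>/<model-repository>
```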
Next, on the resources screen, add 1 GPU request/limit (assuming GPUs are available on your cluster) and ensure you have provided enough memory for the model. To determine these settings, we recommend the NVIDIA Model Analyzer.
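In Kubernetes terms, the GPU and memory settings from the resources screen translate into a container resources block like the following sketch. The memory value is a placeholder; `nvidia.com/gpu` is the standard extended resource name exposed by the NVIDIA device plugin:

```yaml
resources:
  requests:
    memory: 4Gi            # placeholder: size for your model
    nvidia.com/gpu: 1
  limits:
    memory: 4Gi
    nvidia.com/gpu: 1
```

Note that GPU requests and limits must be equal: Kubernetes does not allow GPU overcommit.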
When ready, you can test with images. The request payload will depend on which of the models above you launched.
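As a sketch of what such a payload looks like, the snippet below builds a v2 (Open Inference Protocol) request body for a single CIFAR10-sized image. The tensor name `input_1` and the NHWC shape are assumptions; check your model's metadata (e.g. the `/v2/models/cifar10` endpoint) for the real input name, shape, and expected preprocessing:

```python
import json
import numpy as np

# Hypothetical 32x32 RGB CIFAR10 image, values normalized to [0, 1].
image = np.random.rand(32, 32, 3).astype(np.float32)

# v2 inference protocol request body. "input_1" is a placeholder
# tensor name; replace it with the input name from your model's metadata.
payload = {
    "inputs": [
        {
            "name": "input_1",
            "shape": [1, 32, 32, 3],
            "datatype": "FP32",
            "data": image.reshape(1, 32, 32, 3).tolist(),
        }
    ]
}

body = json.dumps(payload)
# POST this body to http://<host>/v2/models/cifar10/infer
```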