Model Outlier Detection

When ML models are deployed in production, it is important to monitor the data that the model runs inference on. Changes in data can adversely affect the performance of ML models and hence it is important to track these outlier instances of data.

Here we will:

  • Launch an image classifier model trained on the CIFAR10 dataset. Each data instance is a 32x32x3 pixel image classified into one of 10 classes, including truck, frog, and cat

  • Set up a VAE outlier detector for this particular model

  • Send a request to get an image classification

  • Send a perturbed request to get a positive outlier detection

Launch a Seldon ML pipeline

Create an image classifier model deployment in an appropriate namespace

  1. Click Create new deployment on the deployments page to create a Seldon ML Pipeline.

  2. Enter the pipeline details in the deployment creation wizard and click Next:

    • Name: cifar10

    • Type: Seldon ML Pipeline

  3. In the predictor details, select the Tensorflow runtime so the correct server is used, and set the following Model URI:

  4. Click Next for the remaining steps, then click Launch.

Add an Outlier detector

From the deployment overview page, select your pipeline to open its dashboard, then add an outlier detector by clicking the Add button within the Outlier Detection widget.


Enter the following parameters in the modal popup which appears to configure the detector:

  • Detector Name: cifar10-outlier.

  • Storage URI: (for public Google buckets, the secret field is optional)

  • Reply URL: (By default, the Reply URL is set as seldon-request-logger in the logger’s default namespace. If you are using a custom installation, please change this parameter according to your installation.)


Then, click Create Detector to complete the setup.
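Conceptually, a VAE outlier detector scores each instance by how poorly the VAE reconstructs it; scores above a threshold are flagged as outliers. The sketch below illustrates that scoring step with numpy, using a stand-in `reconstruct` function in place of a trained VAE — the real detector is configured entirely through the UI above, and the threshold value here is illustrative only.

```python
import numpy as np

def outlier_scores(x, reconstruct, threshold):
    """Score instances by mean squared reconstruction error.

    x: batch of images, shape (n, 32, 32, 3), values in [0, 1].
    reconstruct: function mapping the batch to its reconstruction.
    threshold: scores above this value are flagged as outliers.
    """
    x_hat = reconstruct(x)
    # Per-instance mean squared error over all pixels and channels.
    scores = np.mean((x - x_hat) ** 2, axis=(1, 2, 3))
    return scores, scores > threshold

# Stand-in "VAE": a trained model would reconstruct inputs toward the
# training manifold; here we simply blur each image toward the batch mean.
def fake_reconstruct(x):
    return 0.5 * x + 0.5 * x.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
clean = rng.uniform(0.4, 0.6, size=(4, 32, 32, 3))              # in-distribution
perturbed = np.clip(clean + rng.normal(0, 0.4, clean.shape), 0, 1)  # noisy
batch = np.concatenate([clean, perturbed])
scores, is_outlier = outlier_scores(batch, fake_reconstruct, threshold=0.01)
```

The key property is that in-distribution images reconstruct well (low score) while perturbed images do not, which is what the detector exploits.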

Make Predictions

Run a single prediction using the expected frog image. Download its payload from the table below; a perturbed version of the same frog image, in the same format, is also available there. Using the predict tool in the UI, send a few of these requests in random order.

Payload type        | Tensorflow Payload
--------------------|--------------------------------
Expected Instance   | Download Frog Payload
Outlier Instance    | Download Perturbed Frog Payload
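The downloadable payloads follow the TensorFlow V1 HTTP format: a JSON body with an `instances` list of 32x32x3 images. As a rough sketch of how such payloads are structured, the snippet below builds one from a synthetic stand-in image (not the real frog payload) and produces a perturbed variant by adding noise, the same kind of perturbation that should trigger a positive outlier detection.

```python
import json
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the real frog image: one 32x32x3 array with values in [0, 1].
image = rng.uniform(size=(32, 32, 3))

# TensorFlow V1 HTTP payload: a JSON object with an "instances" list.
payload = {"instances": [image.tolist()]}

# Perturbed variant: add Gaussian noise and clip back into [0, 1],
# pushing the instance away from the training distribution.
perturbed = np.clip(image + rng.normal(0, 0.4, image.shape), 0.0, 1.0)
perturbed_payload = {"instances": [perturbed.tolist()]}

body = json.dumps(payload)  # JSON body as sent by the predict tool
```

The UI's predict tool accepts JSON in this shape directly, so the downloaded payload files can be pasted in as-is.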


View outliers from historical requests

Go to the Requests screen to view all historical requests. Each instance shows its outlier score, and you can highlight outliers based on this score.
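Highlighting outliers on the Requests screen amounts to comparing each request's outlier score against a threshold. A minimal illustration of that filtering step (the field names and scores here are invented for the example, not the actual request-log schema):

```python
# Each historical request carries an outlier score assigned by the detector.
requests_log = [
    {"id": "req-1", "outlier_score": 0.0007},
    {"id": "req-2", "outlier_score": 0.0312},   # the perturbed frog request
    {"id": "req-3", "outlier_score": 0.0011},
]

THRESHOLD = 0.01  # requests scoring above this are highlighted as outliers

highlighted = [r["id"] for r in requests_log if r["outlier_score"] > THRESHOLD]
```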


Monitor outlier instances on a timeline

Under the Monitor section you can see a timeline of outlier requests.



If you experience issues with this demo, see the troubleshooting or Elasticsearch sections of the docs.