Challenger Models

Your deployed model will almost certainly degrade over time: as the data used to train it looks increasingly different from the incoming prediction data, prediction quality declines and becomes less reliable. Challenger models provide a framework for comparing alternative models to the current production model. You can submit challenger models that shadow a deployed model, then replay predictions already made to analyze the performance of each. This lets you compare the predictions made by the challenger models against the currently deployed model (also called the “champion” model) and determine whether another DataRobot model would be a better fit.

Figure 1. Deployment Challengers

To support challenger models for a deployment, enable the Challengers option together with prediction row storage. To do so, adjust the deployment’s data drift settings either when creating the deployment (from the Data Drift tab) or, for an existing deployment, from the Settings > Data tab. Prediction row storage instructs DataRobot to store prediction request data at the row level for the deployment; DataRobot uses these stored predictions to compare the champion and challenger models.

Figure 2. Enable challenger models
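
If you prefer to script this configuration, the following is a minimal sketch using the DataRobot Python client. The update_predictions_data_collection_settings method (for prediction row storage) and the parameter names shown are assumptions about the client in your release, so verify them against the in-app Platform Documentation.

    # Minimal sketch, assuming the "datarobot" Python client exposes the
    # deployment settings methods named below; verify against your release.
    import datarobot as dr

    dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

    deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

    # Data drift tracking (Settings > Data).
    deployment.update_drift_tracking_settings(
        target_drift_enabled=True,
        feature_drift_enabled=True,
    )

    # Prediction row storage, required for challengers -- assumed method name.
    deployment.update_predictions_data_collection_settings(enabled=True)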

Before adding a challenger model to a deployment, you first build and select the model to be added as a challenger. You can choose a model from the Leaderboard, or use your own custom model deployed within MLOps. In either case, challenger models are referenced as model packages from the Model Registry. Challenger models must have the same target type as the champion. They are not required to use the same features, but they must in order to replay the historical predictions.
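
Before registering a candidate, you can sanity-check this compatibility from the Python client. This is a minimal sketch; the target_type attribute and the get_features_used method are assumed to be available in your client version.

    import datarobot as dr

    dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

    champion_project = dr.Project.get("CHAMPION_PROJECT_ID")
    candidate_project = dr.Project.get("CANDIDATE_PROJECT_ID")

    # The challenger must predict the same target type as the champion.
    assert champion_project.target_type == candidate_project.target_type

    champion_model = dr.Model.get(champion_project.id, "CHAMPION_MODEL_ID")
    candidate_model = dr.Model.get(candidate_project.id, "CANDIDATE_MODEL_ID")

    # Features the candidate needs must be present in the stored prediction rows
    # for replay to work; here we simply compare against the champion's features.
    missing = set(candidate_model.get_features_used()) - set(champion_model.get_features_used())
    print("Features the stored rows may not contain:", sorted(missing))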

When you have selected a model to serve as a challenger, from the Leaderboard navigate to Predict > Deploy and select Add to Model Registry. This creates a model package for the selected model in the Model Registry, which enables you to add the model to a deployment as a challenger.

Figure 3. Deploy model - add to model registry
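
Registration can also be scripted against the public API. The modelPackages/fromLeaderboard endpoint shown below is an assumption based on the Model Registry’s model-package naming and may differ between releases; confirm the exact path in the in-app Platform Documentation.

    import requests

    API = "https://app.datarobot.com/api/v2"
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    # Assumed endpoint for creating a model package from a Leaderboard model.
    resp = requests.post(
        f"{API}/modelPackages/fromLeaderboard/",
        headers=HEADERS,
        json={"modelId": "LEADERBOARD_MODEL_ID"},
    )
    resp.raise_for_status()
    model_package_id = resp.json()["id"]
    print("Created model package:", model_package_id)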

Now navigate to the deployment for the champion model, select Challengers, and click Add challenger model. Choose the model you want from the Model Registry and click Select model package.

Figure 4. Add challenger model

Figure 5. Select challenger model from model registry
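
You can attach a challenger through the API as well. The challengers endpoint and request fields below are illustrative assumptions rather than a confirmed contract; check the in-app Platform Documentation for the request your release expects.

    import requests

    API = "https://app.datarobot.com/api/v2"
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    deployment_id = "YOUR_DEPLOYMENT_ID"

    # Assumed endpoint and payload for attaching a model package as a challenger.
    resp = requests.post(
        f"{API}/deployments/{deployment_id}/challengers/",
        headers=HEADERS,
        json={
            "modelPackageId": "MODEL_PACKAGE_ID",
            "predictionEnvironmentId": "PREDICTION_ENVIRONMENT_ID",
            "name": "Challenger 1",
        },
    )
    resp.raise_for_status()
    print("Challenger added:", resp.json().get("id"))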

The champion model is listed along with all challenger models. For each model, the display includes:

  • The model name.
  • Metadata such as the project name and the execution environment type.
  • The training data.
  • An actions menu to replace or delete the model.
Figure 6. Selected champion and challenger models

After adding challenger models, you can replay stored predictions made with the champion model. This allows you to compare performance metrics such as predicted values, accuracy, and data errors across each model. To replay predictions, select Update challenger predictions.

Figure 7. Replay predictions

The prediction requests made within the time range specified by the date slider will be replayed for the challengers. After predictions are made, click Refresh on the time range selector to view an updated display of performance metrics for the models.  

Figure 8. Refreshed display of performance metrics
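
If you export the replayed prediction rows for offline analysis (for example, one CSV per model), a quick row-level comparison needs only pandas. The file names and column names here are placeholders for whatever your export uses.

    import pandas as pd

    # Placeholder file and column names for exported prediction rows.
    champion = pd.read_csv("champion_predictions.csv")      # columns: association_id, prediction
    challenger = pd.read_csv("challenger_predictions.csv")  # same layout

    merged = champion.merge(
        challenger, on="association_id", suffixes=("_champion", "_challenger")
    )
    merged["delta"] = merged["prediction_challenger"] - merged["prediction_champion"]
    print(merged["delta"].describe())  # how far the two models disagree per row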

You can also replay predictions on a periodic schedule instead of doing so manually. Navigate to a deployment’s Settings > Challengers tab, turn on the Automatically replay challengers toggle, and set when you want predictions replayed (for example, every hour, or every Sunday at 18:00).

Figure 9. Set schedule for replaying predictions

Once you have replayed the predictions, you can analyze and compare the results. The Predictions chart (under the Challengers tab) records the average predicted value of the target for each model over time. Hover over a point to compare the average value for each model at the specific point in time. For binary classification projects, use the Class dropdown to select the class for which you want to analyze the average predicted values. The chart also includes a toggle that allows you to switch between continuous and binary modes. 

Figure 10. Predictions chart for champion and challenger
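
Conceptually, each point on the Predictions chart is the mean predicted value within a time bucket. The sketch below reproduces that aggregation with pandas, using placeholder file and column names.

    import pandas as pd

    rows = pd.read_csv("replayed_predictions.csv", parse_dates=["timestamp"])
    # columns assumed: timestamp, model ("champion"/"challenger"), prediction

    avg_by_hour = (
        rows.set_index("timestamp")
            .groupby("model")["prediction"]
            .resample("1H")
            .mean()
            .unstack(level="model")
    )
    print(avg_by_hour.head())  # one averaged value per model per hour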

The Accuracy chart records the change in a selected accuracy metric value (LogLoss, in this example) over time. These metrics are identical to those used for the evaluation of the model before deployment. Use the dropdown to change accuracy metrics.

Figure 11. Accuracy chart for champion and challenger
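
For reference, LogLoss over a batch of replayed binary predictions can be recomputed with scikit-learn; the file and column names below are placeholders.

    import pandas as pd
    from sklearn.metrics import log_loss

    scored = pd.read_csv("challenger_scored.csv")
    # columns assumed: actual (0/1) and predicted_probability for the positive class

    print("LogLoss:", log_loss(scored["actual"], scored["predicted_probability"]))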

The Data Errors chart records the data error rate for each model over time. Data error rate measures the percentage of requests that result in an HTTP error (i.e., problems with the prediction request submission).
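
The data error rate itself is straightforward to reproduce from exported prediction request logs: count the requests that returned an HTTP error and divide by the total. A small sketch with an assumed log format:

    import pandas as pd

    # Assumed log export with one row per prediction request and its HTTP status.
    log = pd.read_csv("prediction_requests.csv")  # columns: timestamp, status_code

    is_error = log["status_code"] >= 400
    data_error_rate = 100.0 * is_error.mean()
    print(f"Data error rate: {data_error_rate:.2f}% of requests returned an HTTP error")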

For more information on the MLOps suite of tools, visit the DataRobot Community for a variety of additional videos, articles, webinars, and more.

More Information

If you’re a licensed DataRobot customer, search the in-app Platform Documentation for Challengers tab.
