Deployments with MLOps Getting Started Guide


Deployment is a critical component for gaining real value from a model, yet unfortunately it’s also where a lot of models get stuck. DataRobot offers a comprehensive set of solutions around model deployment as well as model monitoring, management, and governance through MLOps. These solutions work well with the different personas involved in MLOps and are agnostic to your current platform and choice of model-development environment.  

With MLOps, you can deploy models using several different solutions:

  • Easily deploy and interact with a DataRobot-built model using the REST API.
  • Deploy a model built with tools like Python or R using DataRobot Custom Inference Models.
  • Export a DataRobot-built model as a container.
  • Export a DataRobot-built model as scoring code.
  • Deploy a remote agent for an external model.


And you can monitor, manage, and govern all of these models in one place:

  • Monitor any of these models for service health, data drift, and accuracy.
  • Set up custom notifications that tell you when your deployment needs your attention.
  • Manage and replace your models easily while keeping a documented record of every change that occurs.
  • Establish governance roles and processes for each deployment.
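To give a feel for what data-drift monitoring measures, here is a generic sketch of a population stability index (PSI), one common drift statistic. This is an illustration only, not DataRobot's internal implementation; the bin count and thresholds are conventional rules of thumb.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Values near 0 suggest little drift; values above ~0.2 suggest the
    scoring data has shifted away from the training data."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def shares(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each bucket share to avoid log(0) on empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]        # training distribution
shifted = [0.1 * i + 3 for i in range(100)]  # drifted scoring data
print(psi(train, train) < 0.1)   # identical data: near-zero PSI
print(psi(train, shifted) > 0.2) # shifted data: large PSI
```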

Overview

To get the big picture of MLOps, see these resources, which show the entire product from end to end:

Deploying Models 

Before deploying a model, you first create a model package in the Model Registry.

Once you have the model you want, you just need to deploy it. You can do this all through one system, regardless of how you created the model. DataRobot gives you three options for deployment: using REST APIs, using Custom Inference Models, or using Portable Prediction Servers (PPSs) with MLOps agents.

Deploy a DataRobot-built model with the REST API:

Real-time deployments
Batch deployments
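Scoring a real-time deployment amounts to POSTing feature rows to the deployment's prediction endpoint. The sketch below builds such a request with only the standard library; the `predApi/v1.0` route and the `DataRobot-Key` header follow the commonly documented pattern, but the host, deployment ID, and credentials are placeholders you must replace with your own, and the exact URL and headers should be confirmed against your prediction server's documentation.

```python
import json
import urllib.request

def prediction_request(host, deployment_id, api_token, datarobot_key, rows):
    """Build a scoring request for a DataRobot deployment (sketch).

    The route and header names below follow the commonly documented
    pattern; verify them against your own prediction server.
    """
    url = f"{host}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_token}",
        "DataRobot-Key": datarobot_key,  # typically required on managed AI Cloud
    }
    body = json.dumps(rows).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers)

req = prediction_request(
    "https://example.orm.datarobot.com",  # placeholder host
    "abc123", "API_TOKEN", "DR_KEY",      # placeholder credentials
    [{"feature_1": 1.0, "feature_2": "a"}],
)
print(req.full_url)
# Once the placeholders are real, sending is one call:
# with urllib.request.urlopen(req) as resp:
#     predictions = json.load(resp)
```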

Deploy a model built with tools like Python or R using the DataRobot Custom Inference Models solution:
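A custom inference model is essentially your own code plus a small set of hooks that DataRobot's custom-model runner calls. The sketch below follows the commonly documented `load_model`/`score` hook pattern with a trivial stand-in model; treat the exact signatures and return format as something to confirm against the Custom Inference Model documentation for your release.

```python
# custom.py -- hooks for a DataRobot custom inference model (sketch).
import pandas as pd

def load_model(code_dir):
    """Called once at startup to load a serialized model from code_dir.
    Here we return a trivial stand-in instead of unpickling a real model."""
    return {"coef": 2.0, "intercept": 1.0}

def score(data, model, **kwargs):
    """Called per request with a DataFrame of feature rows.
    For a regression model, return a DataFrame with a Predictions column."""
    preds = data["x"] * model["coef"] + model["intercept"]
    return pd.DataFrame({"Predictions": preds})

model = load_model(".")
out = score(pd.DataFrame({"x": [0.0, 1.0, 2.0]}), model)
print(out["Predictions"].tolist())  # [1.0, 3.0, 5.0]
```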

Export a DataRobot-built model as a container using a Portable Prediction Server (PPS) and then monitor it with MLOps agents on cloud-provider Kubernetes services:

Exporting Models

When you develop a model in DataRobot, you can export it as scoring code. DataRobot offers two approaches for this: Scoring Code and DataRobot Prime.

Scoring Code
DataRobot Prime
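Scoring Code exports a self-contained Java JAR that can score CSV files from the command line. The sketch below assembles such an invocation; the `csv --input/--output` subcommand follows DataRobot's documented CLI pattern, but the JAR filename is a placeholder and the flags should be confirmed for your JAR version.

```python
def scoring_command(jar_path, input_csv, output_csv):
    """Assemble a batch-scoring command for an exported Scoring Code JAR.
    The `csv --input/--output` subcommand follows DataRobot's documented
    CLI pattern; confirm the flags for your JAR version."""
    return [
        "java", "-jar", jar_path,
        "csv",
        f"--input={input_csv}",
        f"--output={output_csv}",
    ]

cmd = scoring_command("5f3a_model.jar", "to_score.csv", "scored.csv")
print(" ".join(cmd))
# Run with: subprocess.run(cmd, check=True)
```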

Deploy a remote agent for monitoring external models (either custom models or DataRobot exported scoring code):

Monitoring Models

When you deploy models using MLOps, you can then track them all from one place. MLOps is a one-stop command center for all your models.

Overview of Monitoring models
Specific changes to monitor

Managing and Governing your Deployment 

MLOps simplifies replacing an existing model, and governance features ensure you can easily track and manage deployments within your enterprise.

Integrations

When you want to deploy models or just use model predictions in another application, DataRobot provides integrations with many other technologies.

AWS Athena
Snowflake Integrations
Azure DB/SQL Integrations
Tableau Integrations

Webinars

This article provides an overview of some of the deployment/MLOps resources. There are also a number of MLOps-related webinars listed in our index of learning sessions.

Scripts and Code

You can find code examples in R and Python on our Community GitHub. Below are some of the more relevant repos for MLOps processes.

What’s New

DataRobot Release 6.2 includes loads of enhancements and new functionality for MLOps.

Help us!

We're trying to keep this list up-to-date with all the helpful DataRobot MLOps content. If you find something that isn't on this list, please Comment here and let us know!
