Deploy in SageMaker and Monitor with MLOps Agents

(Updated June 2021)

This article showcases how to make predictions and monitor external models deployed on AWS SageMaker using DataRobot’s Scoring Code and MLOps agents.

Overview

Scoring Code

DataRobot automatically runs code generation for those models that support it, and indicates code availability with an icon on the Leaderboard. This option allows you to download validated Java Scoring Code for a predictive model without approximation; the code is easily deployable in any environment and is not dependent on the DataRobot application.

Why deploy on AWS SageMaker

While DataRobot provides its own scalable prediction servers that are fully integrated with the platform, there are multiple reasons why someone would want to deploy on AWS SageMaker:

  • Company policy or governance decision.
  • Custom functionality on top of the DataRobot model.
  • Low-latency scoring without the overhead of API calls. Java code is typically faster than scoring through the Python API.
  • The ability to integrate models into systems that can’t necessarily communicate with the DataRobot API.

There are also some drawbacks:

  • No data drift and accuracy tracking out-of-the-box unless MLOps agents are configured.
  • The time required to deploy to AWS SageMaker.

Ultimately, it’s up to you and your use case to decide where to deploy your model: DataRobot supports many integrations!

MLOps Agent

AWS is one of the biggest cloud providers, and you can leverage AWS SageMaker as a deployment environment for your Scoring Code. AWS SageMaker allows you to bring your own machine learning models (in several supported formats) and expose them as API endpoints. DataRobot packages the MLOps agent along with the model in a Docker container, which is then deployed on AWS SageMaker.

Figure 1. MLOps agent architecture

In this example we are deploying a Scoring Code model for predicting whether or not a loan will default. It’s a standard binary classification example we use for demos at DataRobot.

Note: As an alternative to this process, you can use the Portable Prediction Server (PPS, a DataRobot execution environment for DataRobot model packages) and CodeGen with embedded agents. For help and more information about other supported processes, see the “Deployments with MLOps Getting Started Guide” community article.

Scoring Code Download

The first step to deploying a DataRobot model to AWS SageMaker is to download the Scoring Code JAR file. This can be found under the Downloads tab from within the model menu (Figure 2). Be sure to choose the compiled binary from the dropdown.

Figure 2. Scoring Code download
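If you prefer to script this step, the DataRobot Python client can download the same JAR. Below is a minimal sketch, assuming the datarobot package is installed; the endpoint token and the project and model IDs are placeholders for the model you see on the Leaderboard:

import datarobot as dr

# Authenticate against DataRobot (endpoint and token are placeholders).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Fetch the Leaderboard model (both IDs are placeholders).
model = dr.Model.get("PROJECT_ID", "MODEL_ID")

# Download the compiled Scoring Code binary (source_code=False gives the compiled JAR).
model.download_scoring_code("model.jar", source_code=False)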

How MLOps Agents work

MLOps Library

The MLOps library lets you get the same monitoring features with your own models as you get with DataRobot models. It provides an interface for reporting metrics to DataRobot’s MLOps service; from there, you can monitor deployment statistics and predictions, track feature drift, and get other insights to analyze model performance.
You can use the MLOps library with any type of model, including Scoring Code models downloaded from DataRobot. Currently, the MLOps library is available in Java and Python (both Python 2 and Python 3).

The MLOps agent can be downloaded using the DataRobot API (https://app.datarobot.com/api/v2/mlopsInstaller) or as a tarball from the DataRobot UI: select your user icon, navigate to the Developer Tools page, and download the tarball from there.
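For example, here is a minimal sketch of the API download using the Python requests package (the API token is a placeholder, and the output file name is arbitrary):

import requests

# DataRobot API token is a placeholder; you can create one under Developer Tools.
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

response = requests.get(
    "https://app.datarobot.com/api/v2/mlopsInstaller",
    headers=headers,
    stream=True,
)
response.raise_for_status()

# Stream the agent tarball to disk.
with open("datarobot-mlops-agent.tar.gz", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)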

Setting up monitoring with the agent involves the following steps:
  1. Install the DataRobot MLOps agent and libraries.
  2. Configure the agent.
  3. Start the agent service.
  4. Ensure that the agent buffer directory (MLOPS_SPOOLER_DIR_PATH in the config file) exists.
  5. Configure the channel you want to use for reporting metrics. (The MLOps agent can work with a number of channels, including SQS, Google Pub/Sub, spool file, and RabbitMQ; we’re using SQS for this article.)
  6. Use the MLOps library to report metrics from your deployment.

The MLOps library buffers the metrics locally, which enables high throughput without slowing down the deployment.

The MLOps agent forwards the metrics to the MLOps service. Now you can monitor model performance via the DataRobot MLOps user interface. 
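As a sketch of what step 6 looks like in Python: the snippet below buffers metrics to a local spool directory that the agent forwards. The IDs, spool path, feature names, and prediction format are placeholders, and the exact method signatures are shown in your deployment’s Integrations tab (see Figure 9 later in this article), so treat the calls here as approximate:

import pandas as pd
from datarobot.mlops.mlops import MLOps

# Placeholders: use the IDs from the deployment you create in DataRobot.
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
MODEL_ID = "YOUR_MODEL_ID"

# Initialize the library; here metrics are buffered to a local spool directory
# watched by the MLOps agent (an SQS channel can be configured instead).
mlops = MLOps() \
    .set_deployment_id(DEPLOYMENT_ID) \
    .set_model_id(MODEL_ID) \
    .set_filesystem_spooler("/tmp/ta") \
    .init()

# Example scored batch (feature names and prediction format are illustrative).
scoring_data = pd.DataFrame({"loan_amnt": [10000], "annual_inc": [65000]})
predictions = [[0.92, 0.08]]  # class probabilities for one row

# Report the number of predictions and execution time (ms), then the
# scored features and predictions for drift and accuracy tracking.
mlops.report_deployment_stats(1, 25)
mlops.report_predictions_data(features_df=scoring_data, predictions=predictions)

# Flush any buffered metrics before exiting.
mlops.shutdown()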

Create a Deployment

Helper scripts for creating deployments are available in the examples directories of the MLOps agent tarball. Every example has its own script to create the related deployment, and the tools/create_deployment.py script is available to create your own deployment.

Deployment creation scripts interact with the MLOps service directly and so must be run on a machine with connectivity to the MLOps service.

Every example has a description file (name_deployment_info) and a script to create a deployment.

  1. Edit the description file to configure your deployment.
  2. To enable or disable feature drift tracking, add or exclude the trainingDataset field in the description file.
  3. Create a new deployment by running the script, name_create_deployment.sh.

This will return a deployment ID and initial model ID that can be used to instrument your deployment.
The deployment can also be created from the DataRobot GUI. 

Figure 3. Model Registry

To create a deployment from the DataRobot GUI, use the following steps:

  1. Log in to the DataRobot GUI.
  2. Select Model Registry (1) and click Add New Package (2).
  3. In the dropdown, select New external model package (3). The page shown in Figure 4 appears.
  4. Complete all the information needed for your deployment, and then click Create package.


    Figure 4. External Model Package

    When the package is created, the page shown in Figure 5 appears.

    Figure 5. Created package
  5. Select the Deployments tab and click Deploy Model Package. Validate the details on this page, and then click Create deployment (at the top right of the page).
  6. You can use toggle buttons to enable drift tracking and segment analysis of predictions.

    Figure 6. Settings menu

Once you click Create deployment after filling in the necessary details, you will see the dialog below.

Figure 7. Created Deployment

You can now view the details of the newly created deployment.

Figure 8. Deployment overview

If you select the Integrations tab for the deployment, you can see the monitoring code.

Figure 9. Monitoring Code

When you scroll down through this monitoring code, you can see the DEPLOYMENT_ID and MODEL_ID values, which the MLOps library uses to monitor this specific model deployment.

Figure 10. Deployment and Model ID

Upload Scoring Code to AWS S3 bucket

After you have downloaded the Scoring Code JAR file, you upload that file to an AWS S3 bucket that is accessible to SageMaker.

SageMaker expects model artifacts in S3 to be packaged in tar.gz format, so we will compress our model (the Scoring Code JAR file) into a tar.gz archive using the command below:

tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar

Note: If you are using macOS, tar adds hidden metadata files to the tar.gz package that cause problems during deployment; use the command below instead:

COPYFILE_DISABLE=1 tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar
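A cross-platform alternative is to build the archive with Python’s tarfile module, which sidesteps the macOS metadata issue entirely:

import tarfile

# Package the Scoring Code JAR the way SageMaker expects (tar.gz),
# without any hidden macOS resource-fork files.
with tarfile.open("5e8471fa169e846a096d5137.jar.tar.gz", "w:gz") as tar:
    tar.add("5e8471fa169e846a096d5137.jar")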

Once you have created the tar.gz archive, upload it to an S3 bucket.

Figure 11. S3 bucket overview
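This upload can also be scripted with boto3 (the bucket name and object key are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload the model archive to a bucket that SageMaker can read from.
s3.upload_file(
    "5e8471fa169e846a096d5137.jar.tar.gz",        # local archive
    "my-datarobot-models",                        # placeholder bucket name
    "models/5e8471fa169e846a096d5137.jar.tar.gz", # placeholder object key
)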

Customize Docker image to add MLOps Agent

DataRobot publishes a Docker image (scoring-inference-code-sagemaker:latest) containing the inference code to Amazon ECR. We will use this image as the base image and add a customized container layer containing the MLOps agent.

Figure 12. Scoring Code

The shell script agent-entrypoint.sh runs the Scoring Code JAR and also starts the MLOps agent JAR.

Figure 13. Shell Script agent

The MLOps agent configuration file is set up, by default, to report metrics through the Amazon SQS service, so we just need to provide the SQS queue URL in mlops.agent.conf.yaml:

- type: SQS_SPOOL
  details: {name: "sqsSpool", queueUrl: "https://sqs.us-east-1.amazonaws.com/123456789000/mlops-agent-sqs"}

Now create a Docker image from the Dockerfile. Go to the directory containing the Dockerfile and run the following command:

docker build -t codegen-mlops-sagemaker .

Figure 14. Creating Docker image

This creates a Docker image from the Dockerfile (a reference Dockerfile is shared with the source code).

Publish a Docker Image to Amazon ECR

The next step is to publish the Docker image we just created to the Amazon ECR.

To publish this image to Amazon ECR, do the following:

  1. Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image. Authentication tokens must be obtained for each registry used, and the tokens are valid for 12 hours. Refer to the Amazon ECR documentation for the available authentication options.
  2. In this example, we use Token-based authentication:

    TOKEN=$(aws ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken')

    curl -i -H "Authorization: Basic $TOKEN" https://123456789000.dkr.ecr.us-east-1.amazonaws.com/v2/sagemakertest/tags/list
  3. Create an Amazon ECR repository where you can push your image:
    aws ecr create-repository --repository-name sagemakerdemo

    This returns output as shown below:
    Figure 15. Amazon ECR registry output
    You can also create it from the AWS Management Console: go to the ECR service, select Create Repository, and provide the repository name.

    Figure 16. Create repository page

  4. Identify the image to push by running docker image ls to list the images on your system.

  5. Tag the image you would like to push to AWS ECR.

    • 3b7dee0391a8 is the image ID of the Docker image we just created; it contains the inference code and the MLOps agent.

  6. Tag the image with the Amazon ECR registry, repository, and optional image tag name combination to use. The registry format is aws_account_id.dkr.ecr.region.amazonaws.com. The repository name should match the repository that you created for your image. If you omit the image tag, Docker assumes the latest tag:

    docker tag 3b7dee0391a8 "${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo"

    Figure 17. Image tag

  7. Push the image using the docker push command:

    docker push ${account}.dkr.ecr.${region}.amazonaws.com/sagemakermlopsdockerized

Figure 18. Docker push results

Figure 19. More Docker push results

Once the image is pushed, you can validate it from the AWS Management Console.

Figure 20. AWS Management Console
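You can also confirm the push programmatically with boto3, using the repository name created earlier:

import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# List the images in the repository to confirm the push landed.
response = ecr.describe_images(repositoryName="sagemakerdemo")
for image in response["imageDetails"]:
    print(image.get("imageTags"), image["imagePushedAt"])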

Create Model

  1. Sign in to AWS and enter “SageMaker” in the search bar. Select the first result (“Amazon SageMaker”) to enter the SageMaker console and create a model.

    Figure 21. Create model page
  2. In the IAM role field, select Create a new role from the dropdown if you do not have an existing role on your account. This option creates a role with the required permissions and assigns it to your instance.

    Figure 22. Creating Container

  3. For the Container input options field (1), select Provide model artifacts and inference image location. Specify the location of the Scoring Code image (your model) in the S3 bucket (2) and the registry path to the Docker image containing the inference code (3).

  4. Click Add container below the fields when complete.

    Figure 23. Adding Container

    Finally, your model configuration will look like this:

    Figure 24. Container overview

  5. Open the dashboard on the left side and navigate to the Endpoint configurations page to create a new endpoint configuration. Select the model you have uploaded.

    Figure 25. Endpoint Configuration (1)

  6. Name the endpoint configuration (1) and provide an encryption key, if desired (2). When complete, select Create endpoint configuration (at the bottom of the page).

  7. Use the dashboard to navigate to Endpoints and create a new endpoint:
    Figure 26. Endpoint Configuration (2)

  8. Name the endpoint (1) and opt to use an existing endpoint configuration (2). Select the configuration you just created (3). When complete, click Select endpoint configuration.

When endpoint creation is complete, you can make prediction requests with your model.

When the endpoint is ready to serve requests, its Status changes to InService.

Figure 27. Endpoint check
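The console steps above can also be scripted. Here is a minimal boto3 sketch; the model name, endpoint names, instance type, role ARN, and S3/ECR paths are all placeholders matching the earlier steps:

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# 1. Register the model: the S3 model archive plus the inference image in ECR.
sm.create_model(
    ModelName="datarobot-scoring-code-model",
    PrimaryContainer={
        "Image": "123456789000.dkr.ecr.us-east-1.amazonaws.com/sagemakerdemo:latest",
        "ModelDataUrl": "s3://my-datarobot-models/models/5e8471fa169e846a096d5137.jar.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789000:role/SageMakerExecutionRole",
)

# 2. Create an endpoint configuration pointing at the model.
sm.create_endpoint_config(
    EndpointConfigName="mlops-dockerized-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "datarobot-scoring-code-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# 3. Create the endpoint and wait until it is InService.
sm.create_endpoint(
    EndpointName="mlops-dockerized-endpoint-new",
    EndpointConfigName="mlops-dockerized-config",
)
sm.get_waiter("endpoint_in_service").wait(EndpointName="mlops-dockerized-endpoint-new")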

Making Inferences

Once the SageMaker endpoint status changes to InService, we can start making predictions against it.

The dataset I have used to train this model is a standard Lending Club dataset; I will also make test predictions using the Lending Club data.

Let’s test the endpoint from the command line first to make sure it is responding. This is a binary classification problem, predicting whether the loan will default or not.

We will use the command below to make test predictions, passing the data as a CSV string in the request body. The test_rows.csv and output.json file names are placeholders, and with AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out:

aws sagemaker-runtime invoke-endpoint --endpoint-name mlops-dockerized-endpoint-new --content-type text/csv --body file://test_rows.csv output.json

Figure 28. Making test predictions

To run the above command, make sure you have the AWS CLI installed and configured.
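The same test prediction can be made from Python via boto3 (the CSV file name is again a placeholder):

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# Read a few Lending Club rows (with header) as the CSV request body.
with open("test_rows.csv", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="mlops-dockerized-endpoint-new",
    ContentType="text/csv",
    Body=payload,
)

# The response body is a stream; decode it to see the predictions.
print(response["Body"].read().decode("utf-8"))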

You can also use SageMakerCodegenInference.py (provided in the source code) to make inferences. This Python file uses the DataRobot MLOps library to report metrics back to the DataRobot application, where you can see them in the deployment we created.

Model Monitoring

We can go back to the deployment that we created and check the Service Health page to monitor the model. In this case, the MLOps library reports prediction metrics to the Amazon SQS channel.
The MLOps agent, deployed on SageMaker along with the Scoring Code, reads these metrics from the SQS channel and reports them to the model monitoring service in DataRobot; the results are available in the Service Health tab of the deployment.

Figure 29. Service Health overview

More Information

Check out the community article, Using Scoring Code Models with AWS Sagemaker.

If you’re a licensed DataRobot customer, search the in-app Platform Documentation for Using Scoring Code models in SageMaker or MLOps agent.
