Using Scoring Code Models with AWS Sagemaker


This article shows how to make predictions using DataRobot Scoring Code deployed on AWS SageMaker. Scoring Code lets you download a machine learning model as a JAR file, which you can then deploy in the environment of your choice. In this example, we deploy a Scoring Code model that predicts whether a loan will default, but you can follow the same steps with a model for your own use case.

Overview

AWS SageMaker allows you to bring in your machine learning models and expose them as API endpoints.

DataRobot can export models in Java and Python.

Once exported, you can deploy the model on AWS SageMaker and serve predictions through a SageMaker endpoint. This article focuses on the DataRobot Scoring Code export, which provides a Java JAR file.

The models that support scoring code export are indicated by the Scoring Code icon.

Figure: Scoring Code icon

Why deploy on AWS SageMaker

While DataRobot provides its own scalable prediction servers that are fully integrated with the platform, there are several reasons why someone might want to deploy on AWS SageMaker:

  • Company policy or governance decision.
  • Custom functionality on top of the DataRobot model.
  • Low-latency scoring without the API call overhead. Java code is typically faster than scoring through the Python API.
  • The ability to integrate models into systems that can’t necessarily communicate with the DataRobot API.

In addition, there are also some drawbacks:

  • No data drift and accuracy tracking out-of-the-box unless MLOps agents are configured. (If you do want to monitor and manage the deployed model using the MLOps agent, check out this article.)
  • Additional setup time required to deploy to AWS SageMaker.

All in all, it is up to you and your use case to decide where you would want your model to be deployed; DataRobot supports many integration options.

Scoring Code Download

The first step to deploying a DataRobot model to AWS Sagemaker is to download the scoring code JAR file. Select the model from the Leaderboard, then select Predict > Downloads (Figure 1). Be sure to choose the compiled binary in the dropdown.

Figure 1. Scoring Code Download

Uploading Scoring Code to AWS S3 Bucket

Once you have downloaded the Scoring Code JAR file, upload it to an AWS S3 bucket that SageMaker can access.

SageMaker expects model artifacts to be uploaded to the S3 bucket as a tar.gz archive, so compress the JAR file with the following command:

tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar

If you are using macOS, use the command below instead; macOS tar otherwise adds hidden metadata files to the archive, which cause problems during deployment.

COPYFILE_DISABLE=1 tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar

Once you have created the tar.gz archive, upload it to your S3 bucket (Figure 2).

Figure 2. S3 Bucket
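Before uploading, it can help to confirm the archive is clean by listing its contents; the bucket path in the upload command below is a placeholder for your own.

```shell
# List the archive contents; exactly one entry (the JAR itself) should
# appear, with no macOS ._ metadata files.
tar -tzf 5e8471fa169e846a096d5137.jar.tar.gz

# Upload the archive to S3 (bucket name and key prefix are placeholders)
aws s3 cp 5e8471fa169e846a096d5137.jar.tar.gz s3://your-bucket/models/
```

If the listing shows any `._*` entries, rebuild the archive with `COPYFILE_DISABLE=1` as shown above.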

Publish Docker image to Amazon ECR

Next, we need to publish a Docker image containing the inference code to Amazon ECR. In this example, we download the DataRobot-provided Docker image with the following command:

docker pull datarobot/scoring-inference-code-sagemaker:latest

To publish this image to Amazon ECR:

  1. Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image. Authentication tokens must be obtained for each registry used and are valid for 12 hours. Refer to the Amazon ECR documentation for the available authentication options.
  2. In this example we used token-based authentication:

    TOKEN=$(aws ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken')

    curl -i -H "Authorization: Basic $TOKEN" https://xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/v2/sagemakertest/tags/list

  3. Then, create an Amazon ECR Registry where you can push your image:
    aws ecr create-repository --repository-name sagemakerdemo
    which will give you the output shown in Figure 3:

    Figure 3. ECR repository creation

You can also create the repository from the AWS Management Console: ECR service → Create Repository (provide a repository name).

Figure 4. AWS Management Console - Repository Creation Page

The steps are as follows:
  1. Identify the image to push. Run the docker images command to list the images on your system; in the commands below, xxxxxxxx is the image ID of the DataRobot-provided Docker image containing the inference code (scoring-inference-code-sagemaker:latest) that we downloaded from Docker Hub.
  2. Tag your image with the Amazon ECR registry, repository, and optional image tag combination to use. The registry format is aws_account_id.dkr.ecr.region.amazonaws.com. The repository name should match the repository that you created for your image. If you omit the image tag, Docker assumes the tag is latest.

    docker tag xxxxxxxx "${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo"

  3. Push the image using the docker push command:

    docker push "${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo"

    Once the image is pushed, you can validate it from the AWS Management Console.

    Figure 5. Repositories page
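The tag-and-push steps above can be sketched end to end as a single script; the account ID, region, and repository name are placeholders, and the aws ecr get-login-password login (AWS CLI v2) is shown as an alternative to the token-based flow used earlier.

```shell
# Sketch: authenticate Docker to Amazon ECR, tag the DataRobot inference
# image, and push it. Account, region, and repository are placeholders.
account=123456789012
region=us-east-1
repo=sagemakerdemo

# AWS CLI v2 login (alternative to the curl/token flow shown earlier)
aws ecr get-login-password --region "$region" \
  | docker login --username AWS --password-stdin "${account}.dkr.ecr.${region}.amazonaws.com"

# Tag the pulled DataRobot image and push it to the ECR repository
docker tag datarobot/scoring-inference-code-sagemaker:latest \
  "${account}.dkr.ecr.${region}.amazonaws.com/${repo}"
docker push "${account}.dkr.ecr.${region}.amazonaws.com/${repo}"
```

This is a deployment fragment: it requires Docker, the AWS CLI, and valid AWS credentials, so run it against your own account and region.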

Create the Model

Now it’s time to create the actual model.

  1. Sign in to AWS and enter SageMaker in the search bar. Select the first result, Amazon SageMaker, to enter the SageMaker console and create a model.

    Figure 6. Create model page
  2. In the IAM role field, select Create a new role from the dropdown if you do not have an existing role on your account. This option creates a role with the required permissions and assigns it to your instance.

    Figure 7. Container options

  3. For the Container input options field (1), select Provide model artifacts and inference image location. Specify the S3 location of the Scoring Code model archive (2) and the registry path of the Docker image containing the inference code (3).

    Click Add container below the fields when complete.

    Figure 8. Add container button

    Finally, your model configuration will look like this:

    Figure 9. Finalized model
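If you prefer scripting over the console, the model shown in Figures 6-9 can also be registered from the CLI; this is a hedged sketch, and the role ARN, ECR image URI, and S3 model path are placeholders standing in for the values from the previous steps.

```shell
# Sketch: create the SageMaker model from the CLI.
# The role ARN, ECR image URI, and S3 model path are all placeholders.
aws sagemaker create-model \
  --model-name datarobot-scoring-code-model \
  --execution-role-arn arn:aws:iam::123456789012:role/SageMakerExecutionRole \
  --primary-container Image=123456789012.dkr.ecr.us-east-1.amazonaws.com/sagemakerdemo:latest,ModelDataUrl=s3://your-bucket/models/5e8471fa169e846a096d5137.jar.tar.gz
```

The execution role needs permission to pull the ECR image and read the model archive from S3, which is what the console's "Create a new role" option sets up for you.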

Creating an Endpoint configuration

To set up the endpoint for predictions:

  1. Open the dashboard on the left side and navigate to the Endpoint configurations page to create a new endpoint configuration. Select the model you have uploaded.
    Figure 10. Create Endpoint page
  2. Name the endpoint configuration (1) and provide an encryption key if desired (2). When complete, select Create endpoint configuration at the bottom of the page.
  3. Use the dashboard to navigate to Endpoints and create a new endpoint:
    Figure 11. Endpoint Configuration
  4. Name the endpoint (1) and opt to use an existing endpoint configuration (2). Select the configuration you just created (3). When finished, click Select endpoint configuration.

When endpoint creation completes, you can make prediction requests with your model. Once the endpoint is ready to serve requests, its status changes to InService:

Figure 12. Endpoint status page
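The endpoint configuration and endpoint from the steps above can likewise be created from the CLI; the configuration name, model name, and instance type below are placeholders, with the endpoint name matching the one used later in this article.

```shell
# Sketch: create an endpoint configuration and an endpoint, then wait
# until the endpoint status is InService. Names and instance type are
# placeholders for your own values.
aws sagemaker create-endpoint-config \
  --endpoint-config-name datarobot-endpoint-config \
  --production-variants VariantName=AllTraffic,ModelName=datarobot-scoring-code-model,InitialInstanceCount=1,InstanceType=ml.m5.large

aws sagemaker create-endpoint \
  --endpoint-name mlops-dockerized-endpoint-new \
  --endpoint-config-name datarobot-endpoint-config

# Blocks until the endpoint reaches InService (or fails)
aws sagemaker wait endpoint-in-service --endpoint-name mlops-dockerized-endpoint-new
```

The wait command is convenient in automation, since prediction requests against an endpoint that is still Creating will fail.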

Making Inferences

Once the SageMaker endpoint status changes to InService, we can start making predictions against this endpoint. Here is a link to sample code for doing that.

Let’s test the endpoint from the command line first to make sure the endpoint is responding. This is a binary classification problem for predicting whether or not a loan will default.

We will use the command below to make a test prediction, passing the data as a CSV string in the request body (scoring_data.csv is a placeholder for your scoring data file; with AWS CLI v2, a raw CSV body may also require the --cli-binary-format raw-in-base64-out option):

aws sagemaker-runtime invoke-endpoint \
    --endpoint-name mlops-dockerized-endpoint-new \
    --content-type text/csv \
    --body "$(cat scoring_data.csv)" \
    response.json

To run the above command, make sure you have the AWS CLI installed and configured.

Monitoring the deployed model

Now that your model is deployed, you can monitor and manage it using the MLOps agent, as explained here.

More Information

Check out these community articles:

And if you’re a licensed DataRobot customer, search the in-app Platform Documentation for Scoring Code.

Last update: 07-29-2020 11:34 AM