DataRobot MLOps (Machine Learning Operations) streamlines the path of machine learning models to production, providing the deployment, governance, and monitoring functionality they need.
Using DataRobot MLOps, you can deploy DataRobot models into your own Kubernetes cluster (cloud or on-premise) using Portable Prediction Servers (PPSs). A PPS is a Docker container that packages a DataRobot model together with a monitoring agent and can be deployed by container orchestration tools such as Kubernetes. In doing so, DataRobot customers retain all of the platform's model monitoring capabilities, such as service health and data drift tracking.
When deploying multiple PPSs in the same Kubernetes cluster, you often want a single IP address as the entry point to all of them. A typical way to achieve this is path-based routing, which different Kubernetes Ingress controllers can provide; popular options include Traefik, HAProxy, Ambassador, and NGINX.
This tutorial describes how to use the NGINX Ingress controller for path-based routing to a few PPSs deployed on Amazon EKS.
There are some prerequisites for interacting with AWS and the underlying services. If any (or all) of these tools are already installed and configured, you can skip the corresponding steps. Detailed instructions for each step can be found here.
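As a quick sanity check, you can confirm that the three CLI tools this tutorial relies on (the AWS CLI, eksctl, and kubectl) are on your PATH. This loop is only a small convenience sketch, not part of the official setup:

```shell
# Verify that the required CLI tools are installed; print a hint for any
# that are missing. (Tool list taken from the prerequisites above.)
for tool in aws eksctl kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND - install and configure it before continuing"
  fi
done
```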
We assume that you have already created and locally tested some PPS containers for different DataRobot AutoML models, and pushed them to Amazon Elastic Container Registry (ECR). The detailed step-by-step procedure can be found in this other community tutorial.
The first PPS (housing prices) contains an eXtreme Gradient Boosted Trees Regressor (Gamma Loss) model. The second PPS (image binary classification: hot dog / not hot dog) contains a SqueezeNet Image Pretrained Featurizer + Keras Slim Residual Neural Network Classifier using Training Schedule model.
The latter model has been trained using DataRobot Visual AI functionality.
With the Docker images stored in ECR, you can spin up an Amazon EKS cluster. The EKS cluster needs a VPC with suitable subnets.
Amazon EKS requires subnets in at least two Availability Zones. A VPC with public and private subnets is recommended so that Kubernetes can create public load balancers in the public subnets that load-balance traffic to pods running on nodes that are in private subnets.
eksctl create cluster \
  --name multi-app \
  --vpc-private-subnets=subnet-XXXXXXX,subnet-XXXXXXX \
  --vpc-public-subnets=subnet-XXXXXXX,subnet-XXXXXXX \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key my-public-key.pub \
  --managed
Note: The --managed parameter enables Amazon EKS managed node groups, which automate the provisioning and lifecycle management of nodes (EC2 instances) for Amazon EKS clusters. You can provision optimized groups of nodes for your cluster, and EKS keeps them up to date with the latest Kubernetes and host OS versions. The eksctl tool lets you choose the node size and instance type family via command-line flags or a config file.
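For repeatability, the same cluster can also be described in an eksctl config file instead of command-line flags. The sketch below is an assumption-laden equivalent: the region, availability zones, and subnet IDs are placeholders you must replace with your own values.

```yaml
# cluster.yaml -- eksctl config-file equivalent of the flags above.
# Region, availability zones, and subnet IDs are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: multi-app
  region: us-east-1
vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-XXXXXXX }
      us-east-1b: { id: subnet-XXXXXXX }
    public:
      us-east-1a: { id: subnet-XXXXXXX }
      us-east-1b: { id: subnet-XXXXXXX }
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    ssh:
      allow: true
      publicKeyPath: my-public-key.pub
```

You would then create the cluster with eksctl create cluster -f cluster.yaml.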
Note: Although --ssh-public-key is optional, it is highly recommended that you specify it when you create your node group with a cluster. This option enables SSH access to the nodes in your managed node group. Enabling SSH access lets you connect to your instances and gather diagnostic information if there are issues; you cannot enable remote access after the node group is created.
This command will finish as follows:
Cluster provisioning usually takes between 10 and 15 minutes. When your cluster is ready, test that your kubectl configuration is correct:
kubectl get svc
Recall that AWS Elastic Load Balancing supports three types of load balancers: Application Load Balancers (ALB), Network Load Balancers (NLB), and Classic Load Balancers (CLB). The details can be found here.
The NGINX Ingress controller uses NLB on AWS. NLB is best suited for load balancing of TCP, UDP, and TLS traffic where extreme performance is required. Operating at the connection level (Layer 4 of the OSI model), NLB routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is also optimized to handle sudden and volatile traffic patterns.
Deploy the NGINX Ingress controller (this manifest file also launches the NLB):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: house-regression-deployment
  namespace: aws-tlb-namespace
  labels:
    app: house-regression-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: house-regression-app
  template:
    metadata:
      labels:
        app: house-regression-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - name: house-regression-model
        image: <your_image_in_ECR>
        ports:
        - containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: house-regression-service
  namespace: aws-tlb-namespace
  labels:
    app: house-regression-app
spec:
  selector:
    app: house-regression-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort
The manifests above reference the aws-tlb-namespace namespace; create it first if it does not exist yet:

kubectl create namespace aws-tlb-namespace

Then create the Kubernetes deployment and service:

kubectl apply -f house-regression-deployment.yaml
kubectl apply -f house-regression-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hot-dog-deployment
  namespace: aws-tlb-namespace
  labels:
    app: hot-dog-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hot-dog-app
  template:
    metadata:
      labels:
        app: hot-dog-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - name: hot-dog-model
        image: <your_image_in_ECR>
        ports:
        - containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: hot-dog-service
  namespace: aws-tlb-namespace
  labels:
    app: hot-dog-app
spec:
  selector:
    app: hot-dog-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort
Create the Kubernetes deployment and service:
kubectl apply -f hot-dog-deployment.yaml
kubectl apply -f hot-dog-service.yaml
View all resources that exist in the aws-tlb-namespace:
kubectl get all -n aws-tlb-namespace
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-redirect-ingress
  namespace: aws-tlb-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    app: nginx-redirect-ingress
spec:
  rules:
  - http:
      paths:
      - path: /house-regression(/|$)(.*)
        backend:
          serviceName: house-regression-service
          servicePort: 8080
      - path: /hot-dog(/|$)(.*)
        backend:
          serviceName: hot-dog-service
          servicePort: 8080
Note: The "nginx.ingress.kubernetes.io/rewrite-target" annotation rewrites the URL before forwarding the request to the backend pods. As a result, the paths /house-regression/some-house-path and /hot-dog/some-dog-path transform to /some-house-path and /some-dog-path, respectively.
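You can sanity-check the effect of the rewrite locally. The sed expressions below are only a stand-in that approximates what NGINX does with the second capture group; they are not part of the deployment:

```shell
# Approximate the Ingress rewrite with sed: strip the routing prefix and
# keep the remainder of the path (what /$2 receives in the annotation).
echo "/house-regression/predictions" | sed -E 's#^/house-regression/?(.*)#/\1#'
# -> /predictions
echo "/hot-dog/some-dog-path" | sed -E 's#^/hot-dog/?(.*)#/\1#'
# -> /some-dog-path
```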
Create Ingress for path-based routing:
kubectl apply -f nginx-redirect-ingress.yaml
Verify that Ingress has been successfully created:
kubectl get ingress/nginx-redirect-ingress -n aws-tlb-namespace
(Optional) Use the following if you want to access the detailed output about this ingress:
kubectl describe ingress/nginx-redirect-ingress -n aws-tlb-namespace
Note the ADDRESS value; it is used in the next two scoring requests.
Score the house-regression model:
curl -X POST http://<ADDRESS>/house-regression/predictions -H "Content-Type: text/csv" --data-binary @kaggle_house_test_dataset_10.csv
Score the hot-dog model:
Note: for_pred.csv is a CSV file containing a single column; under the header row, the column's one value is a Base64-encoded image.
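One way to build such a file is sketched below. The image path (hot-dog.jpg) and the column header (image) are assumptions; substitute your own picture and the feature name your model actually expects:

```shell
# Build for_pred.csv: a one-column CSV whose single data row is the
# Base64-encoded image. Header name and image path are placeholders.
IMAGE=hot-dog.jpg
if [ -f "$IMAGE" ]; then
  { printf 'image\n'; base64 "$IMAGE" | tr -d '\n'; printf '\n'; } > for_pred.csv
  echo "Wrote for_pred.csv"
else
  echo "Place your test picture at $IMAGE first."
fi
```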
curl -X POST http://<ADDRESS>/hot-dog/predictions -H "Content-Type: text/csv; charset=UTF-8" --data-binary @for_pred.csv
Original picture for prediction (downloaded from here).
Output (yes, it’s predicted as hot_dog)
Original picture for prediction (downloaded from here).
Output (right, it would be strange to name it hot_dog)
When you are finished, clean up by deleting both namespaces:

kubectl delete namespace aws-tlb-namespace

kubectl delete namespace ingress-nginx
Deploying several Kubernetes services behind the same IP address minimizes the number of load balancers needed and simplifies application maintenance; Kubernetes Ingress controllers make this possible.
This tutorial showed how to set up path-based routing to a few Portable Prediction Servers (PPSs) deployed on Amazon EKS, implemented via the NGINX Ingress controller.