How to run an ML.NET model with DataRobot MLOps


In this tutorial, we will explore how a model that has been built with ML.NET can be deployed and monitored with DataRobot MLOps. 

ML.NET is an open source machine learning framework created by Microsoft for the .NET developer platform. If you want to learn more about it, take a look at Microsoft's ML.NET documentation.

For this tutorial we will use the LendingClub dataset (10K_Lending_Club_Loans.csv).

We want to predict the likelihood that a loan applicant will default; in machine learning, this is referred to as a binary classification problem.

We could easily solve this with DataRobot AutoML, but for the purpose of this tutorial we want to create the model with ML.NET and then productionalize and monitor it with DataRobot MLOps, which allows us to monitor all of our models in one central dashboard, regardless of their source or programming language.

So before we deploy a model to DataRobot MLOps, let’s quickly create a new ML.NET model from scratch, and then create an ML.NET environment for DataRobot MLOps. 

Please note that this DataRobot MLOps ML.NET environment only has to be created once. If you only require support for binary classification and regression models, you can skip this step and simply use the existing “DataRobot ML.NET Drop-In” environment, which can be downloaded from the DataRobot Community GitHub.

1) Create the ML.NET model


To start building .NET apps, we need to download and install the .NET SDK by following the steps outlined on the official .NET download page. Once you've installed it, open a new terminal and run the following command to verify the installation:

dotnet --version

If there is no error, we can proceed to the next step and install the ML.NET CLI tool, as shown below.

dotnet tool install -g mlnet

If the installation of the ML.NET framework was successful, we can create the actual model by following the steps below.

a.) Create your model:

mkdir DefaultMLApp
cd DefaultMLApp
dotnet new console -o consumeModelApp

mlnet auto-train --task binary-classification --dataset "10K_Lending_Club_Loans.csv" --label-column-name "is_bad" --max-exploration-time 1000

b.) Evaluate your model.

After the ML.NET CLI selects the best model, it will display the experiment results, which show a summary of the exploration process, including how many models were explored in the given training time.


While the ML.NET CLI generates code for the highest performing model, it also displays up to five models with the highest accuracy that are found during the given exploration time. It displays several evaluation metrics for those top models, including AUC, AUPRC, and F1-score. (DataRobot licensed customers can search the in-app platform documentation for Guidance for using error metrics to learn more.)

c.) Test the model

The ML.NET CLI adds both the machine learning model and the projects for training and consuming the model to your solution, including:

  • A .NET console app (SampleBinaryClassification.ConsoleApp), which contains ModelBuilder.cs (used to build/train the model) and Program.cs (used to run the model).
  • A .NET Standard class library (SampleBinaryClassification.Model), which contains ModelInput.cs and ModelOutput.cs (input/output classes for model training and consumption) and the generated, serialized ML model.

To try the model, you can run the console app (SampleBinaryClassification.ConsoleApp) to predict the likelihood of default for a single applicant:

cd SampleBinaryClassification.ConsoleApp
dotnet run
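For reference, the consumption code generated by the CLI looks roughly like the following sketch. This is illustrative only; the model file name (MLModel.zip here) and the ModelInput feature values are assumptions, so use whatever names the CLI actually generated in your project.

```csharp
// Illustrative sketch of the generated consumption code (not the CLI's
// exact output; file and type names may differ in your project).
using System;
using Microsoft.ML;

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Load the serialized model produced by the ML.NET CLI
        // (assumed to be named MLModel.zip here).
        ITransformer mlModel = mlContext.Model.Load("MLModel.zip", out var inputSchema);

        // Create a prediction engine for scoring one row at a time.
        var predEngine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(mlModel);

        // Score a single applicant (populate the feature properties
        // of ModelInput with real values from your dataset).
        var input = new ModelInput();
        ModelOutput result = predEngine.Predict(input);

        Console.WriteLine($"Predicted default: {result.Prediction} " +
                          $"(probability: {result.Probability})");
    }
}
```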

2) Create DataRobot MLOps environment package

While DataRobot already provides many environment templates out of the box (including R, Python, Java, PyTorch, etc.), we chose ML.NET to help walk you through the process of creating your own runtime environment from start to finish.

To make an easy-to-use, reusable environment, follow the below guidelines:

  1. Your environment package must include a Dockerfile that installs all of your dependencies.
  2. Custom models require a simple web server in order to make predictions. This can be co-located within the model package, or separated into an environment package. We recommend that you put this in a separate environment package so it can be reused for multiple models that are leveraging the same programming language.
    The web server must be listening on port 8080 and implement the following routes:
    1. GET /{URL_PREFIX}/ This route is used to check if your model's server is running
    2. POST /{URL_PREFIX}/predict/ This route is used to make predictions

The {URL_PREFIX} is passed as an environment variable to the container and needs to be handled by your web server accordingly. The data itself is expected in a multipart form request.
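As an illustration, the two routes might be wired up in ASP.NET Core roughly as follows. This is a sketch under assumptions, not DataRobot's actual implementation: the controller, parameter, and file-part names are placeholders.

```csharp
// Illustrative ASP.NET Core controller for the two required routes.
// The {urlPrefix} route parameter stands in for the URL_PREFIX value
// that DataRobot passes to the container as an environment variable.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class PredictionController : ControllerBase
{
    // GET /{URL_PREFIX}/ - used by DataRobot to check the server is up.
    [HttpGet("{urlPrefix}")]
    public IActionResult Ping(string urlPrefix) => Ok();

    // POST /{URL_PREFIX}/predict/ - scores the uploaded multipart data.
    [HttpPost("{urlPrefix}/predict")]
    public IActionResult Predict(string urlPrefix, IFormFile file)
    {
        // Parse the CSV rows in 'file', run them through the prediction
        // engine, and return results in the documented response format,
        // e.g. {"predictions":[{"True": 0.0, "False": 1.0}]}.
        return Ok(new { predictions = new object[] { } });
    }
}
```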

Request format:

Binary Classification


[Screenshots of the multipart request format]

Response format:

Binary Classification:

{"predictions":[{"True": 0.0, "False": 1.0}]}

Regression:

{"predictions": [12.3]}

Lastly, DataRobot MLOps runs extensive tests before deploying a custom model to ensure reliability; therefore it is important that your web server can handle missing values and return results in the expected response format as outlined above.

  3. The environment package also needs to include an executable script that starts the model server.
  4. Any code, plus that script, should be copied to /opt/code/ by your Dockerfile.

You can download the entire code for DataRobot's MLOps environment package from the DataRobot Community GitHub.

Three things to highlight here:

  1. As mentioned above, we need to use port 8080 so that DataRobot can correctly identify the web server. Therefore, in “appsettings.json” we specify port 8080 for the Kestrel web server as shown below.

     "Kestrel": {
       "EndPoints": {
         "Http": {
           "Url": "http://*:8080"
         }
       }
     }

  2. We initialized the model code (mlContext, mlModel, and predEngine) in the “Startup.cs” class. We do this so that dotnet recognizes file changes whenever you create a new model package.

    // Initialize MLContext
    MLContext ctx = new MLContext();
    // Load model
    DataViewSchema modelInputSchema;
    ITransformer mlModel = ctx.Model.Load(modelPath, out modelInputSchema);
    // Create prediction engine & pass it to our controller
    predictionEngine = ctx.Model.CreatePredictionEngine<ModelInput, ModelOutput>(mlModel);

  3. The start_server.sh shell script is responsible for starting the model server in the container. If we packaged the model and server together, we would only need the compiled version, and the shell script could simply run dotnet consumeModelApp.dll. But since we keep the model code and server environment code separate for reusability, we recompile from source at container startup, as shown below.
    export TMPDIR=/tmp/NuGetScratch/
    mkdir -p ${TMPDIR}
    rm -rf obj/ bin/
    dotnet clean
    dotnet build
    dotnet run
    # to ensure Docker container keeps running
    tail -f /dev/null

Before we can upload our custom environment to DataRobot MLOps, we need to compress our custom environment code to a tarball, as shown below:

tar -czvf mlnetenvironment.tar.gz -C DRMLOps_MLNET_environment .
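If you want to sanity-check the archive layout before uploading, you can list the tarball's contents; the Dockerfile and start script should sit at the archive root rather than inside a subdirectory. The mkdir/touch lines below just recreate a sample layout so the example is self-contained; substitute your real environment directory.

```shell
# Sample layout so this example is self-contained (use your real files).
mkdir -p DRMLOps_MLNET_environment
touch DRMLOps_MLNET_environment/Dockerfile DRMLOps_MLNET_environment/start_server.sh

# Package the environment: -C enters the directory first, so entries
# land at the archive root instead of being nested under the directory name.
tar -czvf mlnetenvironment.tar.gz -C DRMLOps_MLNET_environment .

# Verify: entries should look like "./Dockerfile", with no leading
# "DRMLOps_MLNET_environment/" prefix.
tar -tzf mlnetenvironment.tar.gz
```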

3) Upload DataRobot MLOps environment package

To upload the new MLOps ML.NET environment, refer to the instructions in the DataRobot in-app platform documentation, Creating a new custom inference model (see also screenshot below). 


4) Upload & Test ML.NET model within DataRobot MLOps

Once the environment is created, you create a new custom model entity and upload the actual model artifacts (the serialized model file and ModelInput.cs).

To upload the model, refer to the instructions in the DataRobot in-app platform documentation; search for the section "Creating custom inference models."


Finally, once you have created the environment as well as the model within DataRobot MLOps, you can upload some test data to confirm that everything works as expected (as shown in the following screenshot).


During this phase DataRobot runs a test to determine how the model handles missing values and whether or not the internal web server adheres to the response format.

5) Make predictions with new ML.NET model in DataRobot MLOps

Once all the tests are complete, you can deploy the custom model using the settings shown below.


When this last step is complete, you can make predictions with your new custom model just like with any other DataRobot model (see also Postman collection).


Final thoughts

Even though we built the model outside of DataRobot with ML.NET, we can utilize it like any other DataRobot model and we can track service health and data drift in one central dashboard (see below).


At this point, it is time to congratulate yourself and to summarize what we did:

  1. Created a new machine learning model with ML.NET.
  2. Created a new ML.NET environment for DataRobot MLOps.
  3. Productionalized a custom ML.NET model with DataRobot MLOps so that it can be consumed via a standardized REST API endpoint regardless of how it was developed (which will make your developers happy).
  4. Because DataRobot MLOps leverages Kubernetes as a runtime environment, we automatically added resilience and redundancy to our deployed model. For example, if there is a hardware failure and subsequently a pod fails, it is automatically re-created within the cluster.
  5. Not only can we consume all of our models in a standardized and approved fashion, but similarly we can monitor all of our models from one central dashboard within DataRobot.
  6. By leveraging DataRobot MLOps with its built-in approval workflows for our ML.NET model we added governance to our ML model deployment workflow, which is critical in highly regulated environments such as banking and insurance.
Last update: 06-19-2020 04:38 PM