
DataRobot provides a comprehensive API that lets data scientists build fully automated workflows in the coding environment of their choice. This accelerator shows how to enable end-to-end processing of data stored natively in Azure.
About this Accelerator
In this notebook you'll see how data stored in Azure can be used to train a collection of models on DataRobot. You'll then deploy a recommended model and use DataRobot's batch prediction API to produce predictions and write them back to the source Azure container.
What you will learn
- Acquiring a training dataset from an Azure storage container (each step is sketched in code after this list)
- Building a new DataRobot project
- Deploying a recommended model
- Scoring via the batch prediction API
- Writing results back to the source Azure container
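
The sketches below walk through those steps in order. First, a minimal example of connecting to DataRobot and pulling a training CSV out of an Azure Blob Storage container; the endpoint, token, connection string, container name, and blob name are all placeholder assumptions to replace with your own values.

```python
import io

import datarobot as dr
import pandas as pd
from azure.storage.blob import BlobServiceClient

# Connect to DataRobot (placeholder endpoint and token).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Hypothetical Azure storage details -- substitute your account's values.
AZURE_CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
CONTAINER_NAME = "training-data"
BLOB_NAME = "train.csv"

# Download the training dataset from the source Azure container.
service = BlobServiceClient.from_connection_string(AZURE_CONNECTION_STRING)
blob = service.get_blob_client(container=CONTAINER_NAME, blob=BLOB_NAME)
training_df = pd.read_csv(io.BytesIO(blob.download_blob().readall()))
```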
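
With the data in memory, a project can be created and Autopilot run to train a collection of models. The project name and target column below are illustrative assumptions; on older versions of the client, `analyze_and_model` is named `set_target`.

```python
# Create a project from the DataFrame downloaded above and run Autopilot.
project = dr.Project.create(sourcedata=training_df, project_name="Azure end-to-end")

# "target" is a placeholder column name -- use your dataset's target.
project.analyze_and_model(target="target", worker_count=-1)
project.wait_for_autopilot()
```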
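
Once Autopilot finishes, DataRobot's recommended model can be looked up and deployed to a prediction server; the deployment label below is an arbitrary example.

```python
# Fetch the model DataRobot recommends for deployment.
recommendation = dr.ModelRecommendation.get(project.id)
model = dr.Model.get(project.id, recommendation.model_id)

# Deploy it to the first available prediction server.
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(
    model_id=model.id,
    label="Azure scoring demo",  # arbitrary example label
    default_prediction_server_id=prediction_server.id,
)
```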
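
Finally, the batch prediction API can read scoring data directly from Azure and write predictions back to the source container via its `azure` intake and output adapters. This sketch assumes the connection string has been registered as a DataRobot credential; the blob URLs and credential name are hypothetical.

```python
# Register the Azure connection string as a reusable DataRobot credential.
credential = dr.Credential.create_azure(
    name="azure-demo-credential",  # hypothetical credential name
    azure_connection_string=AZURE_CONNECTION_STRING,
)

# Score directly from Azure and write results back to the source container.
job = dr.BatchPredictionJob.score(
    deployment.id,
    intake_settings={
        "type": "azure",
        "url": "https://myaccount.blob.core.windows.net/training-data/scoring.csv",
        "credential_id": credential.credential_id,
    },
    output_settings={
        "type": "azure",
        "url": "https://myaccount.blob.core.windows.net/training-data/predictions.csv",
        "credential_id": credential.credential_id,
    },
)
job.wait_for_completion()
```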