This accelerator takes an ML model that has been built and refined in DataRobot and deploys it to run in AWS SageMaker. If you already use SageMaker to host models, you can still take advantage of powerful DataRobot features, including AutoML and time series modeling, by integrating DataRobot into your existing deployment processes. The same approach also applies to deploying a DataRobot-built model in other environments.
In this accelerator, you will take the manual steps laid out in our docs page and work through them programmatically, end to end: building a model in DataRobot, exporting it, and hosting it in AWS SageMaker. To simplify the AWS setup needed to run the model, the code also provisions any resources you may not have set up yet.
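The DataRobot half of that workflow might look like the sketch below, using the DataRobot Python client (`pip install datarobot`). The dataset path, target column, and project name are placeholders, and exact client calls can vary slightly between client versions; the `jar_s3_key` naming scheme is a hypothetical convention, not part of the DataRobot API.

```python
import re


def jar_s3_key(project_name: str) -> str:
    """Build a tidy S3 key for the exported Scoring Code JAR
    (used later when the JAR is uploaded for SageMaker).
    Hypothetical naming scheme."""
    slug = re.sub(r"[^a-z0-9]+", "-", project_name.lower()).strip("-")
    return f"scoring-code/{slug}.jar"


def train_and_export(dataset_path: str, target: str, project_name: str,
                     jar_path: str = "model.jar") -> str:
    """Train a DataRobot AutoML project and download the top model's
    Scoring Code JAR."""
    # Imported lazily so the helpers above work without the client installed.
    import datarobot as dr

    # Credentials are read from the standard drconfig.yaml, or pass them
    # explicitly: dr.Client(token="...", endpoint=".../api/v2")
    dr.Client()

    project = dr.Project.create(sourcedata=dataset_path,
                                project_name=project_name)
    # Quick Autopilot keeps the run short; use full Autopilot for a
    # broader model search.
    project.set_target(target=target, mode=dr.AUTOPILOT_MODE.QUICK)
    project.wait_for_autopilot()

    # The leaderboard is ordered best-first for the project metric.
    best_model = project.get_models()[0]
    best_model.download_scoring_code(jar_path)
    return jar_path
```

Note that not every blueprint supports Scoring Code export; in practice you would filter the leaderboard for models that do before calling `download_scoring_code`.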
The code in this AI Accelerator creates the following resources:
- ECR Repository
- S3 Bucket
- IAM Role for SageMaker
- SageMaker inference model
- SageMaker endpoint configuration
- SageMaker endpoint (for real-time predictions)
- SageMaker batch transform job (for batch predictions)
- DataRobot AutoML Project
- DataRobot AutoML Models
- Scoring Code JAR file of AutoML Model
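The AWS items in the list above can be provisioned with boto3, as in this sketch. The region, names, and S3 URI are placeholders, and the sketch assumes the container image that serves a Scoring Code JAR (DataRobot publishes one for SageMaker) has already been pushed to the ECR repository it creates.

```python
import json

REGION = "us-east-1"  # placeholder region


def image_uri(account_id: str, region: str, repo: str,
              tag: str = "latest") -> str:
    """Standard ECR image URI layout."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"


def provision(account_id: str, bucket: str, repo: str, role_name: str,
              model_name: str, jar_s3_uri: str) -> None:
    # Imported lazily so image_uri() works without the AWS SDK installed.
    import boto3

    ecr = boto3.client("ecr", region_name=REGION)
    s3 = boto3.client("s3", region_name=REGION)
    iam = boto3.client("iam")
    sm = boto3.client("sagemaker", region_name=REGION)

    # ECR repository for the Scoring Code inference image.
    ecr.create_repository(repositoryName=repo)

    # S3 bucket for the model artifact and batch transform input/output.
    # (Outside us-east-1, also pass CreateBucketConfiguration.)
    s3.create_bucket(Bucket=bucket)

    # IAM role that SageMaker assumes when running the model.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    role = iam.create_role(RoleName=role_name,
                           AssumeRolePolicyDocument=json.dumps(trust))
    iam.attach_role_policy(
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess")

    # SageMaker inference model pointing at the image and the JAR in S3.
    sm.create_model(
        ModelName=model_name,
        PrimaryContainer={"Image": image_uri(account_id, REGION, repo),
                          "ModelDataUrl": jar_s3_uri},
        ExecutionRoleArn=role["Role"]["Arn"])

    # Endpoint configuration and the real-time endpoint itself.
    sm.create_endpoint_config(
        EndpointConfigName=f"{model_name}-config",
        ProductionVariants=[{"VariantName": "AllTraffic",
                             "ModelName": model_name,
                             "InstanceType": "ml.m5.large",
                             "InitialInstanceCount": 1}])
    sm.create_endpoint(EndpointName=f"{model_name}-endpoint",
                       EndpointConfigName=f"{model_name}-config")
```

Endpoint creation is asynchronous; the endpoint is usable once `describe_endpoint` reports `EndpointStatus` of `InService`.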
Once you have run through the code, you will have used DataRobot's automated machine learning capabilities to train a model and AWS SageMaker to deploy and host it. If you found this tutorial helpful and would like to learn more about DataRobot and AWS SageMaker, check out our docs website and explore the many other features of both platforms. Whether you are a data scientist, machine learning engineer, or developer, there is plenty to learn and leverage in these tools.
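As a final check, the two prediction paths created above, the real-time endpoint and the batch transform job, can be exercised as sketched below. The endpoint name, model name, S3 URIs, and CSV schema are placeholders.

```python
def to_csv_body(header, rows):
    """Serialize feature rows into the text/csv payload sent to the endpoint."""
    lines = [",".join(header)]
    lines += [",".join(str(v) for v in row) for row in rows]
    return ("\n".join(lines) + "\n").encode("utf-8")


def score_realtime(endpoint_name, header, rows):
    """Real-time predictions via the SageMaker runtime API."""
    # Imported lazily so to_csv_body() works without the AWS SDK installed.
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(EndpointName=endpoint_name,
                                   ContentType="text/csv",
                                   Body=to_csv_body(header, rows))
    return resp["Body"].read().decode("utf-8")


def score_batch(job_name, model_name, input_s3_uri, output_s3_uri):
    """Batch predictions via a SageMaker batch transform job."""
    import boto3

    sm = boto3.client("sagemaker")
    sm.create_transform_job(
        TransformJobName=job_name,
        ModelName=model_name,
        TransformInput={"DataSource": {"S3DataSource": {
                            "S3DataType": "S3Prefix",
                            "S3Uri": input_s3_uri}},
                        "ContentType": "text/csv"},
        TransformOutput={"S3OutputPath": output_s3_uri},
        TransformResources={"InstanceType": "ml.m5.large",
                            "InstanceCount": 1})
```

The transform job runs asynchronously and writes its results under `S3OutputPath`; poll `describe_transform_job` to wait for completion.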