Machine learning use cases are all unique, and model deployment doesn't have a "one-size-fits-all" solution. DataRobot provides flexible, scalable APIs for deploying models, but what can you do if that approach doesn't fit your needs?
In this session we'll explore DataRobot's exportable scoring code and several ways you can integrate these models into your data pipelines to achieve real operational value.
Some topics we'll cover:
When it makes sense to use scoring code vs. the prediction APIs
The basic internal structure of the scoring code packages
Basic scoring functionality (see the first Java sketch after this list)
Scoring large datasets via Spark (see the Spark sketch below)
Custom integration, including MLOps (see the MLOps reporting sketch below)
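For the basic scoring topic, here is a minimal sketch of what embedding an exported model looks like. Scoring code exports as a self-contained Java JAR, which can score a file directly from the command line (along the lines of `java -jar model.jar csv --input=input.csv --output=output.csv`) or be embedded through its Java API. The sketch below assumes the `Predictors` and `IClassificationPredictor` interfaces from the datarobot-prediction library; the model ID and feature names are hypothetical, so check the in-app documentation for the exact signatures shipped with your JAR.

```java
import com.datarobot.prediction.IClassificationPredictor;
import com.datarobot.prediction.Predictors;

import java.util.HashMap;
import java.util.Map;

public class SingleRowScoringSketch {
    public static void main(String[] args) {
        // Load the model packaged in the scoring-code JAR on the classpath.
        // The model ID string is a hypothetical placeholder.
        IClassificationPredictor predictor =
                Predictors.getPredictor("5fabc123deadbeef01234567");

        // Feature names and values are illustrative only.
        Map<String, Object> row = new HashMap<>();
        row.put("loan_amount", 12000.0);
        row.put("purpose", "debt_consolidation");

        // For a classifier, score() returns class label -> predicted probability.
        Map<String, Double> prediction = predictor.score(row);
        prediction.forEach((label, prob) ->
                System.out.printf("%s: %.4f%n", label, prob));
    }
}
```

The same `score(...)` call drops into any JVM-based pipeline, which is what the Spark and MLOps sketches below build on.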
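For scoring large datasets via Spark, DataRobot also ships a Spark wrapper for scoring code; the sketch below instead shows the generic pattern of distributing the single-row Java predictor with mapPartitions, so the model loads once per partition rather than once per row. The paths, the model ID, and the `IRegressionPredictor` signature (a Map of feature name to value in, a double out) are assumptions for illustration.

```java
import com.datarobot.prediction.IRegressionPredictor;
import com.datarobot.prediction.Predictors;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SparkScoringSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("scoring-code-spark")
                .getOrCreate();

        // Hypothetical input path; the scoring-code JAR must be shipped to
        // executors (e.g., via --jars) so the model is on their classpath.
        Dataset<Row> features = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///data/features.csv");
        String[] columns = features.columns();

        JavaRDD<Double> scores = features.toJavaRDD().mapPartitions(rows -> {
            // Load the model once per partition, not once per row.
            IRegressionPredictor predictor =
                    Predictors.getPredictor("5fabc123deadbeef01234567"); // hypothetical ID
            List<Double> out = new ArrayList<>();
            while (rows.hasNext()) {
                Row row = rows.next();
                Map<String, Object> record = new HashMap<>();
                for (int i = 0; i < columns.length; i++) {
                    record.put(columns[i], row.get(i));
                }
                out.add(predictor.score(record));
            }
            return out.iterator();
        });

        scores.saveAsTextFile("hdfs:///out/scores");
        spark.stop();
    }
}
```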
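For custom integration including MLOps, the usual pattern is to report prediction volume and feature/prediction data back to a DataRobot deployment so it can track service health and drift for an externally scored model. The sketch below is loosely modeled on the MLOps library examples; the `MLOps` class, its builder methods, and the reporting calls are assumptions rather than verified signatures, and the IDs are placeholders.

```java
import com.datarobot.mlops.MLOps;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MlopsReportingSketch {
    public static void main(String[] args) throws Exception {
        // Deployment and model IDs below are hypothetical placeholders.
        MLOps mlops = MLOps.getInstance()
                .setDeploymentId("60d1f00dcafe")
                .setModelId("5fabc123deadbeef")
                .init();

        // Report throughput: 100 predictions computed in 250 ms.
        mlops.reportDeploymentStats(100, 250);

        // Report the scored feature values alongside the predictions so
        // the deployment can track data drift.
        Map<String, List<Object>> featureData = new HashMap<>();
        List<Object> amounts = new ArrayList<>();
        amounts.add(12000.0);
        amounts.add(8000.0);
        featureData.put("loan_amount", amounts);

        List<Object> predictions = new ArrayList<>();
        predictions.add(0.12);
        predictions.add(0.87);
        mlops.reportPredictionsData(featureData, predictions);

        mlops.shutdown();
    }
}
```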
Brent Hinks (DataRobot, AI Engineer)
Rajiv Shah (DataRobot, Customer Facing Data Scientist)
Jack Jablonski (DataRobot, AI Success Manager)
After watching the learning session, check out these resources for more information.
DataRobot licensed customers: search the in-app Platform Documentation for "Scoring Code" and "Batch Prediction API".
Let us know what you think
Have questions not answered during the learning session? Want to continue your conversation with Brent and Rajiv? Post your comment here or send an email to firstname.lastname@example.org. We look forward to hearing from you!