Every machine learning use case is unique, and model deployment has no "one-size-fits-all" solution. DataRobot provides flexible, scalable APIs for deploying models, but what can you do if that approach doesn't fit your needs?
In this session we'll explore DataRobot's exportable scoring code and several ways you can integrate these models into your data pipelines to achieve real operational value.
Some topics we'll cover:
When it makes sense to use scoring code vs. the prediction APIs
The basic internal structure of the scoring code packages
Basic scoring functionality
Scoring large datasets via Spark
Custom integration, including MLOps
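To give a taste of the "basic scoring functionality" topic above: exported scoring code ships as a self-contained Java JAR that can score a CSV file directly from the command line, with no connection to the DataRobot cluster. The file names below are placeholders, and exact flags can vary by version, so check the in-app Platform Documentation for the authoritative CLI reference.

```shell
# Hedged sketch: score a local CSV with an exported scoring-code JAR.
# "model.jar" and "input.csv" are hypothetical names; consult the
# Platform Documentation for the exact options in your version.
java -jar model.jar csv \
  --input=input.csv \
  --output=predictions.csv
```

Because the JAR bundles the model and its dependencies, the same artifact can be embedded in batch jobs, services, or Spark pipelines, which is the thread the session follows.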
Brent Hinks (DataRobot, AI Engineer)
Rajiv Shah (DataRobot, Customer Facing Data Scientist)
Jack Jablonski (DataRobot, AI Success Manager)
After watching the learning session, check out these resources for more information.
DataRobot licensed customers: search in-app Platform Documentation for Scoring Code and Batch Prediction API.
Let us know what you think
Have questions that weren't answered during the learning session? Want to continue the conversation with Brent and Rajiv? Post your comment here or email us at email@example.com. We look forward to hearing from you!
Need a tip? DataRobot experts are putting together helpful usage tips covering the platform, trial, features, and more. You can find them easily in the Tip of the Day board (under Read). Let us know if you've found a good one or have one to add!