(Part of a model building learning session series.)
Sophisticated machine learning models have a reputation for being accurate but difficult to interpret; you don't have to simply accept that trade-off. In this learning session, we explore interpretability features that help you understand not just what your model predicts, but how it arrives at its predictions.
These tools are valuable throughout the entire model lifecycle.
Regardless of your role, seeing how a model makes its predictions can help you understand and trust it.
(We've attached the slides from the learning session to this article.)
DataRobot Community articles:
DataRobot Community learning sessions:
DataRobot.com resources:
If you're a licensed DataRobot customer, search the in-app Platform Documentation for "SHAP-based Prediction Explanations" and the "SHAP reference." Also search the documentation for the Understand section.
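To give a feel for what SHAP-based prediction explanations look like, here is a minimal, generic sketch using the open-source `shap` package with a scikit-learn model. This is not the DataRobot API; the dataset and model are assumptions chosen only to illustrate per-prediction feature attributions of the kind covered in the session.

```python
# Generic illustration of SHAP-based prediction explanations (not the DataRobot API).
# Assumptions: the open-source `shap` package, scikit-learn, and a public demo dataset.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute SHAP values: one additive contribution per feature, per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# For each of the first five rows, list the features that pushed the
# prediction furthest above or below the model's expected value.
for row, contributions in zip(X.index[:5], shap_values):
    top = sorted(zip(X.columns, contributions), key=lambda p: abs(p[1]), reverse=True)[:3]
    print(f"Row {row}: top contributions {[(f, round(v, 2)) for f, v in top]}")
```

Each row's SHAP values sum (together with the expected value) to that row's prediction, which is what makes them useful as per-prediction explanations rather than only global importance scores.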
Have questions that weren't answered during the learning session? Want to continue the conversation with Mark and Rajiv? You can send email to learning_sessions@datarobot.com or post a comment here. We're looking forward to hearing from you!