Sophisticated machine learning models have a reputation for being accurate yet difficult to interpret, but you don’t have to simply accept that tradeoff. In this learning session, we explore interpretability features that help you understand not just what your model predicts, but how it arrives at its predictions.
These tools are important throughout the whole model lifecycle.
If you’re developing a model, you can learn which features matter overall and where your model needs improvement.
If you’re a stakeholder for a model, you can see the patterns that the model discovered and compare them against domain knowledge and business rules.
If you’re using a model in production to help make decisions, you can learn which features were most important in individual cases, and use that as a guide for actionable next steps or interventions.
Regardless of your role, seeing how the model makes its predictions can help you understand and trust it; the sketch below illustrates both the global and per-prediction views.
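Within DataRobot, these per-prediction insights are delivered through SHAP-based Prediction Explanations. As a rough illustration of the same two views outside the platform, here is a minimal sketch using the open-source shap package with a scikit-learn model; the dataset, model, and plotting calls are illustrative assumptions, not material from the session.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative data and model: a tree ensemble on the scikit-learn diabetes dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view (model developers and stakeholders): which features matter overall,
# summarized as the mean absolute SHAP value per feature.
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local view (decision support in production): which features pushed one
# individual prediction above or below the average prediction.
base_value = float(np.ravel(explainer.expected_value)[0])
shap.force_plot(base_value, shap_values[0], X_test.iloc[0], matplotlib=True)
```

The bar summary answers the developer and stakeholder question of which features matter overall, while the force plot answers the production question of which features drove one individual prediction.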
Mark Romanowsky (DataRobot, Data Scientist)
Rajiv Shah (DataRobot, Data Scientist)
(We've attached the slides from the learning session to this article.)
If you’re a licensed DataRobot customer, search the in-app Platform Documentation for SHAP-based Prediction Explanations and the SHAP reference. Also search the documentation for the Understand section.
Let us know what you think!
Have questions that weren’t answered during the learning session? Want to continue the conversation with Mark and Rajiv? You can send an email to firstname.lastname@example.org or post a comment here. We’re looking forward to hearing from you!