
Explainability on locally trained custom models

We have a custom NN model trained locally. Can we port that model into DataRobot and interpret its predictions in the platform (using SHAP, LIME, or anything else)? Thanks.

3 Replies

Hey @arindam,

The docs page on Prediction Explanations for deployments says that SHAP is supported only for native DataRobot models. However, XEMP is used by default for prediction explanations on all models.


DataRobot Employee

Hi @arindam ,


Take a look at the "External Predictions" capability in the platform to see if it suits your needs. Documentation on External Predictions can be found below:


Essentially, you can bring a column of prediction values from an externally trained model (such as your custom NN) into the DataRobot platform. The external model is added to the Leaderboard in your project, where you can run a subset of Evaluate insights on it, such as the Lift Chart, ROC Curve (for binary classification), and Bias and Fairness analysis. You can also use Model Comparison to compare your external prediction model against DataRobot-trained models.
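As a rough sketch of what that prediction column could look like (this is not official DataRobot client code; the column names and the stand-in scoring function are invented for illustration), you might score your local model and attach its output before uploading:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset; replace with your own features and target.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature_a": rng.normal(size=5),
    "feature_b": rng.normal(size=5),
    "target": rng.integers(0, 2, size=5),
})

def local_model_predict(frame):
    # Stand-in for your custom NN's predict/predict_proba call.
    return 1 / (1 + np.exp(-(frame["feature_a"] + frame["feature_b"])))

# The extra column of externally produced predictions is what gets
# brought into the platform alongside the original data.
df["external_prediction"] = local_model_predict(df)
df.to_csv("external_predictions.csv", index=False)
```

Check the External Predictions documentation for the exact column naming and upload format the platform expects.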


Caveat: Understand/interpretability insights are unsupported for external prediction models, because the DataRobot platform does not know which training features the externally trained model used.
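If you still want per-prediction explanations for the external model, you can compute them locally before (or instead of) uploading. Below is a minimal LIME-style sketch in plain numpy, not DataRobot functionality: perturb a point, score the black-box model, and fit a proximity-weighted linear surrogate whose coefficients approximate local feature importance. The `black_box` function is a stand-in for your NN.

```python
import numpy as np

def black_box(X):
    # Stand-in for your custom NN; replace with model.predict(X).
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.0 * X[:, 1])))

def local_surrogate(predict, x, n_samples=500, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Sample perturbations around the point to explain.
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = predict(X)
    # Proximity weights: perturbations near x count more.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares fit of [intercept, features] -> prediction.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

importances = local_surrogate(black_box, np.array([0.0, 0.0]))
```

For this toy model the surrogate recovers that the first feature pushes the prediction up and the second pushes it down; production-grade tooling like the `shap` or `lime` packages adds sampling schemes and regularization on top of this idea.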


Hope this helps!



Alex Shoop




Hi @arindam ,


I hope my colleagues' responses answer your question about explainability of individual predictions.

DataRobot also has Feature Impact and Feature Effects to help you understand your models, as well as Insights generated by the different Leaderboard models.
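The core idea behind Feature Impact is permutation importance, which you can also sketch locally in a few lines (this is an illustrative reimplementation of the concept, not DataRobot's code; the rule-based `model_score` is made up): shuffle one column at a time and measure how much the model's score drops.

```python
import numpy as np

def model_score(X, y):
    # Stand-in model: accuracy of a fixed rule that depends on column 0.
    preds = (X[:, 0] > 0).astype(int)
    return np.mean(preds == y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

baseline = model_score(X, y)
impact = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the column's relationship to y
    impact.append(baseline - model_score(Xp, y))
```

Here only the first column matters, so permuting it causes a large score drop while the others barely move; Feature Effects then goes further and shows the direction and shape of each feature's relationship.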

Both of the DataRobot University foundation classes listed below cover these in detail; you may wish to enroll in one to get a better understanding of those methods:


AutoML I:

DataRobot for Data Scientists:

