Hi, I have some business stakeholders asking me how confident we are in a particular model prediction. 1. Does DataRobot provide a confidence level for a particular prediction?
2. I'd also like to hear your thoughts on how you gain stakeholders' trust in using the model. Thanks!
1. I am not sure what you mean by confidence levels; could you elaborate? Is there a specific metric you are looking for?
2. There are a number of ways the platform makes it easy to be confident about model performance. Briefly, DataRobot uses 5-fold cross-validation and a 20% holdout by default for all models, which controls for both sampling bias and overfitting. The platform also generates evaluation metrics and visualizations such as lift charts, ROC curves, and confusion matrices. You also get model-agnostic interpretability tools, Feature Impact and Feature Effects, which help you understand and communicate what is happening globally with your model. Additionally, you get up to 10 prediction explanations for every row in your dataset, which lets you zoom in and see how the model behaves case by case.
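To make the validation setup concrete, here is a minimal sketch of the same general idea in scikit-learn (this is an illustration of the approach, not DataRobot's internal implementation): reserve a 20% holdout, run 5-fold cross-validation on the rest, and compute a ROC AUC and confusion matrix on the holdout.

```python
# Illustrative sketch of 5-fold CV plus a 20% holdout (not DataRobot's API).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic binary-classification data stands in for a real dataset.
X, y = make_classification(n_samples=1000, random_state=0)

# Reserve a 20% holdout; cross-validate on the remaining 80%.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")

# Fit on the training portion and evaluate once on the untouched holdout.
model.fit(X_train, y_train)
hold_auc = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
cm = confusion_matrix(y_hold, model.predict(X_hold))

print(f"CV AUC: {cv_scores.mean():.3f}  Holdout AUC: {hold_auc:.3f}")
print(cm)
```

If the cross-validation scores and the holdout score are close, that is a good sign the model is not overfitting the training partitions.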
I hope this helps answer your questions. Please feel free to reach out anytime.
Just wondering if it makes sense to provide the user with something like (say, for a binary classification problem): "I predict the probability of target X happening is between 50% and 70% for case Y", given low model accuracy for such cases?
Thanks for the thoughts on item #2. I agree this could readily be provided from the DataRobot platform.
Depending on the stakeholders you are trying to build confidence with (e.g. other data scientists vs. business executives vs. front line consumers of predictions) the answer may be a bit different. If you provide a bit more context on the audience and their anticipated concerns we might be able to provide additional ideas.
I can let other members speak specifically to the AutoML product, but if you are working with a time series problem, the Automated Time Series product produces a prediction interval for each forecast, displayed in blue around the predictions in the graphs. By default the prediction interval is the range in which 80% of the predictions are expected to fall, though the interval width can be configured at prediction time. This is similar to a confidence interval, but the prediction interval is based on the residual errors measured from the models during training.
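The residual-based interval described above can be sketched in a few lines. This is a simplified illustration, not the Automated Time Series implementation: the residuals here are synthetic stand-ins for the errors you would collect during training, and an 80% interval is taken as the 10th-to-90th percentile band of those residuals added to a hypothetical point forecast.

```python
# Illustrative sketch: an 80% prediction interval built from training residuals.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residuals (actual - predicted) collected during model training.
residuals = rng.normal(loc=0.0, scale=5.0, size=500)

# An 80% interval spans the 10th to 90th percentile of the residuals.
lo, hi = np.percentile(residuals, [10, 90])

forecast = 100.0  # hypothetical point forecast for a future timestep
interval = (forecast + lo, forecast + hi)
print(f"80% prediction interval: {interval[0]:.1f} to {interval[1]:.1f}")
```

Changing the percentile bounds (e.g. [2.5, 97.5] for a 95% interval) is the analogue of configuring the interval width at prediction time.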