
Retrieve Sensitivity/Specificity of multiple projects

Hi all,


I am trying to retrieve a cumulative leaderboard via the API, covering multiple projects and the models within them, that includes sensitivity/specificity metrics.

Any suggestions on how to obtain that?

This is part of the code:



# Import modules

import datarobot as dr
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
import os
from datarobot import ExternalRocCurve
from datarobot.errors import ClientError
from datarobot.utils import from_api


# Connect to the API, e.g. via dr.Client(endpoint='...', token='...')
# or a local drconfig.yaml

# Retrieve the first 200 projects and print each leaderboard
for project in dr.Project.list()[0:200]:
    models = project.get_models()
    for idx, model in enumerate(models):
        print('[{}]: {} - {} - {} - {} - {} - {} - {} - {} - {} - {}'.format(
            idx,
            model.metrics['AUC']['validation'],
            model.metrics['AUC']['crossValidation'],
            model.metrics['AUC']['holdout'],
            model.metrics['LogLoss']['validation'],
            model.metrics['LogLoss']['crossValidation'],
            model.metrics['LogLoss']['holdout'],
            model.model_type,
            model.sample_pct,
            model.blueprint_id,
            model.prediction_threshold))
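One way to turn the loop above into a single cumulative leaderboard is to collect each model's metrics into a list of dicts and build one DataFrame at the end. A sketch under stated assumptions: `leaderboard_rows` is a hypothetical helper (not part of the DataRobot SDK), and the column choice here is just an example.

```python
def leaderboard_rows(projects):
    """Collect one row of leaderboard metrics per model across projects.

    Hypothetical helper: `projects` is any iterable of dr.Project
    objects, e.g. dr.Project.list()[0:200].
    """
    rows = []
    for project in projects:
        for model in project.get_models():
            # .get() avoids KeyErrors when a metric is absent for a model
            auc = model.metrics.get('AUC', {})
            logloss = model.metrics.get('LogLoss', {})
            rows.append({
                'project': project.project_name,
                'model_type': model.model_type,
                'sample_pct': model.sample_pct,
                'AUC_validation': auc.get('validation'),
                'AUC_holdout': auc.get('holdout'),
                'LogLoss_validation': logloss.get('validation'),
                'LogLoss_holdout': logloss.get('holdout'),
            })
    return rows

# cumulative = pd.DataFrame(leaderboard_rows(dr.Project.list()[0:200]))
```

The resulting DataFrame can then be sorted or filtered across all projects at once, which is usually easier than reading the per-project printouts.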

2 Replies
DataRobot Alumni

Please find the solution in this Jupyter notebook file.

Since both sensitivity and specificity depend on the confusion matrix, which in turn depends on the selected threshold, there is another way to obtain them:

# mod0 is a dr.Model instance from the leaderboard
roc = mod0.get_roc_curve('holdout')
threshold = roc.get_best_f1_threshold()
metrics = roc.estimate_threshold(threshold)
sensitivity = metrics['true_positive_rate']  # sensitivity = TPR
specificity = metrics['true_negative_rate']  # specificity = TNR
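For reference, the two quantities come straight from the confusion-matrix counts, so they can also be computed by hand once you have the counts at a given threshold. A minimal standalone sketch (no SDK required; function name is my own):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) from raw confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # share of actual positives correctly flagged
    specificity = tn / (tn + fp)  # share of actual negatives correctly rejected
    return sensitivity, specificity

# sensitivity_specificity(80, 20, 90, 10) → (0.8, 0.9)
```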

You can read more on this topic here:
Confusion matrix