Retrieve Sensitivity/Specificity of multiple projects

Hi all,

 

I am trying to retrieve a cumulative Leaderboard via the API for multiple projects and the models within them, including sensitivity/specificity metrics.

Any suggestions on how to obtain that?

This is part of the code:

@christian 

 

# Import modules
import datarobot as dr
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
import os
from datarobot import ExternalRocCurve
from datarobot.errors import ClientError
from datarobot.utils import from_api

sns.set_style('ticks')
sns.set_context('poster')

# Connect to API
dr.Client(config_path='~/drconfig.yaml')

# Retrieve all projects and print the leaderboard of each one
for project in dr.Project.list()[0:200]:
    print(project, project.id)

    models = project.get_models()
    for idx, model in enumerate(models):
        print('[{}]: {} - {} - {} - {} - {} - {} - {} - {} - {} - {} - {}'.format(
            idx,
            model.metrics['AUC']['validation'],
            model.metrics['AUC']['crossValidation'],
            model.metrics['AUC']['holdout'],
            model.metrics['LogLoss']['validation'],
            model.metrics['LogLoss']['crossValidation'],
            model.metrics['LogLoss']['holdout'],
            model.model_type,
            model.id,
            model.sample_pct,
            model.blueprint_id,
            model.prediction_threshold))

Accepted Solutions

Since both sensitivity and specificity are derived from the confusion matrix, which in turn depends on the selected threshold, there is another path to obtain them:

roc = mod0.get_roc_curve('holdout')          # ROC curve data for the holdout partition
threshold = roc.get_best_f1_threshold()      # threshold that maximizes F1
metrics = roc.estimate_threshold(threshold)  # confusion-matrix metrics at that threshold
sensitivity = metrics['true_positive_rate']
specificity = metrics['true_negative_rate']

You can read more on this topic here:
Confusion matrix
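
Putting the two pieces together, here is a minimal sketch of a cumulative leaderboard across projects; this is an illustration, not code from this thread. It assumes the projects are binary classification with an unlocked holdout, reuses the [0:200] project slice from the question, and skips models whose ROC data is unavailable via the ClientError already imported above.

import datarobot as dr
import pandas as pd
from datarobot.errors import ClientError

dr.Client(config_path='~/drconfig.yaml')

rows = []
for project in dr.Project.list()[0:200]:
    for model in project.get_models():
        try:
            roc = model.get_roc_curve('holdout')
        except ClientError:
            continue  # no ROC data (e.g. locked holdout or non-binary project)
        threshold = roc.get_best_f1_threshold()
        est = roc.estimate_threshold(threshold)
        rows.append({
            'project': project.project_name,
            'model_type': model.model_type,
            'model_id': model.id,
            'threshold': threshold,
            'sensitivity': est['true_positive_rate'],
            'specificity': est['true_negative_rate'],
        })

# One DataFrame covering every model in every project
leaderboard = pd.DataFrame(rows)
print(leaderboard.sort_values('sensitivity', ascending=False))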


dalilaB
DataRobot Alumni

Please find the solution in this Jupyter notebook file.

