Hi, does anyone know how to set up local validation with AUC? As I understand it, sklearn's auc evaluation metric isn't working because our target contains only 0s and 1s. So, are there any other ways to set up CV? I'd be glad to receive any tips or snippets of code.
After training the model, you need to generate the fpr and tpr with the roc_curve function (using the true labels and the predicted probability of the positive class), then calculate the AUC with the auc function.
Oh, I got it, thanks a lot!
from sklearn.metrics import roc_curve, auc

def auc_metric(gt, pred):
    """
    gt: ground truths. Array or list
    pred: Array/list of probabilities for the positive class
    """
    # build the ROC curve from labels and probabilities, then integrate it
    fpr, tpr, _ = roc_curve(gt, pred)
    return auc(fpr, tpr)

# Example
lgb_model = LGBMClassifier(**lgb_params)

Thanks, that helps a lot!
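In case it helps, here is a rough sketch of how that example might continue on a single validation split; lgb_params, X_train, X_valid, y_train and y_valid are just placeholder names for your own parameters and data split.

from lightgbm import LGBMClassifier

# lgb_params, X_train, y_train, X_valid, y_valid are assumed to exist in your pipeline
lgb_model = LGBMClassifier(**lgb_params)
lgb_model.fit(X_train, y_train)

# auc_metric expects the probability of the positive class, not 0/1 predictions
valid_pred = lgb_model.predict_proba(X_valid)[:, 1]
print(auc_metric(y_valid, valid_pred))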
I've been using sklearn.metrics.roc_auc_score:
cv_score = roc_auc_score(y_true, y_pred)
y_true is zeros and ones, y_pred is the predicted probability of the positive class. Works fine so far.
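If it helps, here is a minimal sketch of a full CV setup scored with roc_auc_score, assuming X and y are numpy arrays and lgb_params is your LightGBM parameter dict (all placeholder names, so adapt them to your own pipeline):

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# X, y and lgb_params are assumed to be defined elsewhere
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []

for fold, (train_idx, valid_idx) in enumerate(skf.split(X, y)):
    X_train, X_valid = X[train_idx], X[valid_idx]
    y_train, y_valid = y[train_idx], y[valid_idx]

    model = LGBMClassifier(**lgb_params)
    model.fit(X_train, y_train)

    # roc_auc_score expects the probability of the positive class, not 0/1 predictions
    valid_pred = model.predict_proba(X_valid)[:, 1]
    score = roc_auc_score(y_valid, valid_pred)
    fold_scores.append(score)
    print(f"Fold {fold}: AUC = {score:.5f}")

print(f"Mean CV AUC: {np.mean(fold_scores):.5f}")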