Xente Fraud Detection Challenge
$4,500 USD
Accurately classify the fraudulent transactions from Xente's e-commerce platform
20 May–22 September 2019 23:59
872 data scientists enrolled, 438 on the leaderboard
What's the metric being used?
published 10 Aug 2019, 13:28
edited ~21 hours later

My score here does not reflect the accuracy, recall, or F1 score I get on my local machine, so I would like to ask: what metric is being used?

[Resolved: my leaderboard scores are now close to the scores on my held-out split of the training data.]

The score on Zindi is evaluated against a held-out test dataset, not the dataset you trained on.
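
If it helps, here is a minimal sketch of what that means in practice: score on a split your model never saw during training rather than on the rows it was fit on. It uses a synthetic imbalanced dataset and a plain logistic regression as stand-ins, not the actual Xente data or any particular approach.

```python
# Minimal sketch: compare the score on training rows vs. a held-out split.
# Synthetic, imbalanced data stands in for the Xente transactions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98], random_state=0)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The training-set score is usually optimistic; the held-out score is the
# closer analogue to what the leaderboard measures on the unseen test set.
print("F1 on training rows:", f1_score(y_train, model.predict(X_train)))
print("F1 on held-out rows:", f1_score(y_valid, model.predict(X_valid)))
```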

I know this. I was trying to convey that my result here was nowhere near my test results on the training data, but I did a poor job getting that across. This might still sound verbose, but I just need to know which metric Zindi uses to score this particular challenge.

Oh OK. They are using the formula TP / (TP + FP), i.e. precision. That can differ from your local results, because the F1 score, 2 * (Recall * Precision) / (Recall + Precision), is often what gets reported by default.
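
For a concrete comparison, here is a toy example (made-up labels and predictions, not Xente data) showing how precision and F1 are computed and why they can give different numbers for the same predictions:

```python
# Toy illustration of the two formulas above; numbers are made up.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # actual labels
y_pred = [1, 1, 1, 0, 1, 1, 0, 0]   # model predictions

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 3
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

precision = tp / (tp + fp)                          # 3 / 5 = 0.60
recall = tp / (tp + fn)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # ~0.667

# scikit-learn gives the same numbers
print(precision, precision_score(y_true, y_pred))
print(recall, recall_score(y_true, y_pred))
print(f1, f1_score(y_true, y_pred))
```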