Confusion from Scores on LB?
"Hi everyone, I wanted to open a quick discussion to better understand the leaderboard scoring mechanism. The competition overview states that the metric is a weighted average of MAE and RMSE. Typically that means lower is better (a score closer to 0 is perfect). However, the top leaderboard scores (around ~0.96) differ significantly from the scores I'm seeing in my local validation, so I'm finding the scale ambiguous. Could anyone (or the Zindi team) clarify:
Is the leaderboard score strictly (MAE * 0.5) + (RMSE * 0.5)?
Is there any scaling or normalization applied to the target columns before scoring?

Thanks for the help!"
Hi @Ojuka,
Normalisation is applied. I suggest you follow this discussion thread: https://zindi.africa/competitions/agribora-commodity-price-forecasting-challenge/discussions/29381 and read this post to get a better idea of how multi-metric evaluation is performed: https://zindi.africa/learn/introducing-multi-metric-evaluation-or-one-metric-to-rule-them-all
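For anyone comparing local scores to the leaderboard, here is a minimal sketch of the plain formula the question asks about, `(MAE * 0.5) + (RMSE * 0.5)`, plus a hypothetical min-max scaling of the targets. This is only an illustration of why normalisation changes the scale of the score; it is not Zindi's actual implementation (see the linked posts for that), and the `weighted_mae_rmse` helper and the min-max step are my own assumptions.

```python
import numpy as np

def weighted_mae_rmse(y_true, y_pred, w_mae=0.5, w_rmse=0.5):
    """Hypothetical score: plain weighted average of MAE and RMSE (lower is better)."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return w_mae * mae + w_rmse * rmse

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 320.0])

# Score on raw target values.
raw = weighted_mae_rmse(y_true, y_pred)

# If the targets were min-max scaled to [0, 1] before scoring, the errors
# (and hence the score) would shrink onto a 0-1 scale, which is one reason
# leaderboard values can look very different from raw local-validation errors.
lo, hi = y_true.min(), y_true.max()
scaled = weighted_mae_rmse((y_true - lo) / (hi - lo), (y_pred - lo) / (hi - lo))
```

With an affine (min-max) transform, `scaled` is exactly `raw / (hi - lo)`, so the same model can produce wildly different-looking numbers depending on whether scoring happens before or after normalisation.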