
agriBORA Commodity Price Forecasting Challenge

Helping Kenya
€8,250 EUR
Completed (2 months ago)
Data analysis
GIS
Time-series
Forecasting
Nowcasting
1230 joined
365 active
Start: Nov 14, 25
Close: Dec 27, 25
Reveal: Jan 13, 26
Mugisha_
Multimetric evaluation
14 Dec 2025, 11:40 · 2

What's the theoretical and practical justification for having both MAE and RMSE as evaluation metrics for a single predicted target? The normalization of the two doesn't seem to make any mathematical sense to me.

How does it relate back to the end goal? I would imagine if we are trying to optimize an objective function, we choose an evaluation metric and then minimize or maximize that. I'm not sure what the end goal here is to be honest. It would make sense if we had two or more target variables that are evaluated independently and then a weighted score taken to get a single value. I've wanted to participate but this confusion has kept me away.

Discussion 2 answers
AJoel
Zindi

Hi @Mugisha_,

You raised a valid concern. You’re right that normally we’d just pick one metric to optimise.

However, in real settings, people often look at multiple metrics to see how well a model is performing. This is what the multi-metric evaluation aims to achieve. Each metric tells you something different:

  • MAE shows the average error: how far off predictions are on average, with every error weighted equally.
  • RMSE gives more weight to outliers, so it penalises large errors more heavily.

For this challenge, both MAE and RMSE are combined using a normalisation and weighting process to give a single leaderboard score. The multi-metric evaluation gives a more complete picture of how good your model really is. This choice aligns with the challenge goal of producing price forecasts that are accurate on average and avoid large misses.
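As a rough sketch of how such a combination could work, assuming min-max normalisation of each metric across all submissions and equal weights (the platform's exact formula isn't specified here, so treat these choices as illustrative):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the errors."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean squared error: penalises large errors more heavily."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def combined_score(y_true, preds_by_team, w_mae=0.5, w_rmse=0.5):
    """Compute each metric per submission, min-max normalise across
    submissions so both land on a 0-1 scale, then take a weighted
    average (lower is better)."""
    maes = np.array([mae(y_true, p) for p in preds_by_team])
    rmses = np.array([rmse(y_true, p) for p in preds_by_team])

    def minmax(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return w_mae * minmax(maes) + w_rmse * minmax(rmses)
```

The normalisation step is what makes the two metrics comparable: MAE and RMSE are on the same units as the target but different scales, so averaging the raw values would let the larger-valued metric dominate.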

You can read more here: Multi-Metric Evaluation on Zindi.

15 Dec 2025, 10:54
Upvotes 1
Mugisha_

In practice, for problem-specific modelling, problem-specific metrics are applied, and all modelling is geared towards minimising or maximising performance on that metric.

If large errors are more costly for the business, say in financial risk management, then MAE is of little importance to the target outcome and RMSE should be the focus, and vice versa. By bundling both metrics into a compounded metric, you risk covering up a poor RMSE score with a good MAE score (or the other way round), and so end up with a model that looks averagely good on the leaderboard but is unusable in the real world.
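The masking risk described above can be made concrete with a small hypothetical example (values invented for illustration): one model makes a single large miss, the other is consistently off by a modest amount, and the two metrics rank them in opposite orders.

```python
import numpy as np

def mae(t, p):
    """Average absolute error; treats all errors equally."""
    return np.mean(np.abs(t - p))

def rmse(t, p):
    """Root mean squared error; large misses dominate."""
    return np.sqrt(np.mean((t - p) ** 2))

y_true = np.zeros(10)

# Model A: perfect except one huge miss (the costly case in risk terms).
pred_a = np.zeros(10)
pred_a[0] = 10.0

# Model B: consistently off by a modest amount.
pred_b = np.full(10, 2.0)

print(mae(y_true, pred_a), rmse(y_true, pred_a))  # 1.0, ~3.16
print(mae(y_true, pred_b), rmse(y_true, pred_b))  # 2.0, 2.0
# MAE ranks A first; RMSE ranks B first. With equal weights and
# min-max normalisation over just these two entries, both models
# receive the same combined score, so the leaderboard cannot tell
# a one-big-miss model apart from a consistently-modest one.
```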