
UNIDO AfricaRice Quality Assessment Challenge

Helping Ghana · $5,000 USD · Completed (~1 month ago)
Computer Vision · Object Detection
487 joined · 203 active
Start: Dec 24, 2025 · Close: Feb 01, 2026 · Reveal: Feb 02, 2026
Koleshjr
Multimedia University of Kenya
Can we get the function used to display the overall score?
Help · 20 Jan 2026, 11:24 · 2

Hello @Ajoel and @meganomaly

Is it possible to get the function that displays the overall scores so that we can replicate it locally?

Discussion · 2 answers
stefan027

Hey @Koleshjr, agree it would be great to get the function. Your question made me wonder if I could reverse engineer the formula. Zindi describes the approach here: https://zindi.africa/learn/introducing-multi-metric-evaluation-or-one-metric-to-rule-them-all

The formula is x_norm = 1 - (x - x_min) / (x_max - x_min)

For regression problems with MAE, x_min is zero. To determine x_max, they say the following: "Regression metrics behave differently because a value of zero represents a perfect score, and the upper bound can grow indefinitely. To keep these metrics comparable, Zindi sets the x_max value to the metric score of the starter notebook on the leaderboard."
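To make that normalization concrete, here's a tiny sketch with made-up numbers (not values from this challenge):

```python
def normalize(x: float, x_min: float, x_max: float) -> float:
    """Zindi-style min-max normalization: 1.0 is a perfect score."""
    return 1 - (x - x_min) / (x_max - x_min)

# For an error metric like MAE, x_min = 0 and x_max is the baseline score,
# so a model with half the baseline's error normalizes to 0.5:
print(normalize(20.0, 0.0, 40.0))  # 0.5
print(normalize(0.0, 0.0, 40.0))   # 1.0 (perfect score)
```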

There isn't a starter notebook for this challenge, so it's unclear how x_max was determined. However, we can reverse-engineer it from the per-variable MAE scores by solving a system of linear equations. Starting from the x_norm formula:

x_norm = 1 - (x - x_min) / (x_max - x_min)

=> x_norm = 1 - x / x_max

=> x / x_max = 1 - x_norm

=> Ax = b, one equation per leaderboard row, where A holds that row's per-variable MAEs, b = 1 - overall score, and the unknowns x are the per-variable coefficients

I used the top 15 scores on the LB to construct the system of linear equations and np.linalg.solve to solve it. I get the following coefficients:

array([0.00066607, 0.00066607, 0.00066607, 0.00066609, 0.00066608, 0.00066604, 0.00066607, 0.00067273, 0.00067607, 0.00066607, 0.00066607, 0.00066606, 0.00066607, 0.00066607, 0.00066606])
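For anyone curious about the mechanics of that solve step, here's a self-contained sketch on synthetic data (the real inputs were the top-15 leaderboard rows, which I'm not reproducing here; the coefficient value is just the one recovered above):

```python
import numpy as np

# Suppose the overall score is 1 - sum_i(c_i * mae_i). Given 15 teams'
# per-variable MAEs (matrix A, one row per team) and their overall scores
# (vector s), the coefficients satisfy A @ c = 1 - s.
rng = np.random.default_rng(0)
n_vars = 15
true_c = np.full(n_vars, 0.00066607)               # hypothetical coefficients
A = rng.uniform(0.1, 50.0, size=(n_vars, n_vars))  # synthetic per-variable MAEs
s = 1 - A @ true_c                                 # overall scores implied by the model
c = np.linalg.solve(A, 1 - s)                      # recover the coefficients
print(np.allclose(c, true_c))                      # True
```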

All the coefficients are very similar, so it looks like they may have used the same x_max for all 15 variables. In any case, the following function should reproduce the metric:

import numpy as np

def compute_score(scores: dict) -> float:
    # Coefficients recovered from the linear system above
    B = np.array([0.00066607, 0.00066607, 0.00066607, 0.00066609, 0.00066608,
        0.00066604, 0.00066607, 0.00067273, 0.00067607, 0.00066607,
        0.00066607, 0.00066606, 0.00066607, 0.00066607, 0.00066606])
    keys = ['Red Count', 'Yellow Count', 'Green Count', 'WK Length Average',
            'WK Width Average', 'WK LW Ratio Average', 'Average L', 'Average A',
            'Count', 'Long Count', 'Broken Count', 'Medium Count', 'Black Count',
            'Chalky Count', 'Average B']
    maes = np.array([scores[k] for k in keys])
    return 1 - (maes * B).sum()

The input is a dict like this (using your current LB scores):

{'Red Count': 10.21086261,
 'Yellow Count': 25.94759749,
 'Green Count': 17.6965628,
 'WK Length Average': 0.236529927,
 'WK Width Average': 0.104787856,
 'WK LW Ratio Average': 0.13092911,
 'Average L': 1.3064739,
 'Average A': 0.797080489,
 'Count': 41.34808324,
 'Long Count': 42.10593769,
 'Broken Count': 53.0148157,
 'Medium Count': 0.650324533,
 'Black Count': 45.89132281,
 'Chalky Count': 47.4409819,
 'Average B': 1.681566058}

This gives 0.8073775044386595, which is accurate to 5 decimal places.
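And a quick self-contained check, repeating the coefficients and the example leaderboard MAEs from above (in the same key order as the function), confirms that value:

```python
import numpy as np

# Coefficients and example per-variable LB scores copied from the post above
B = np.array([0.00066607, 0.00066607, 0.00066607, 0.00066609, 0.00066608,
              0.00066604, 0.00066607, 0.00067273, 0.00067607, 0.00066607,
              0.00066607, 0.00066606, 0.00066607, 0.00066607, 0.00066606])
maes = np.array([10.21086261, 25.94759749, 17.6965628, 0.236529927,
                 0.104787856, 0.13092911, 1.3064739, 0.797080489,
                 41.34808324, 42.10593769, 53.0148157, 0.650324533,
                 45.89132281, 47.4409819, 1.681566058])
overall = 1 - (maes * B).sum()
print(overall)  # ~0.8073775
```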

Now I'm done with my procrastination for the afternoon :)

20 Jan 2026, 13:48
Upvotes 8
Koleshjr
Multimedia University of Kenya

Thanks @stefan027, this is very helpful!