
CGIAR Root Volume Estimation Challenge

Helping Africa
$15 000 USD
Completed (~1 year ago)
Computer Vision
Prediction
1064 joined
257 active
Start: Jan 24, 2025
Close: Mar 09, 2025
Reveal: Mar 10, 2025
AJoel
Zindi
Leaderboard Sealed
Connect · 1 Apr 2025, 13:30 · 24

Dear Zindians,

Well done on competing in a particularly complex challenge!

Zindi remains committed to delivering valuable solutions to our clients and partners.

While we uphold high standards of usability and value, we also recognize the complexity of this challenge and appreciate the dedication and effort each of you put into your submissions.

Following our review of the CGIAR Root Volume Estimation Challenge, winners were selected based on the following criteria:

  • Usability & Usefulness: The proposed solution must be practical and valuable for the client (feature selection / engineering).

  • Correctness of Preprocessing: Image merging must be performed accurately, ensuring proper alignment of the left and right image segments and stacking the images vertically.
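To illustrate the preprocessing criterion above: the left and right image segments must be joined before the frames are stacked vertically. The exact alignment procedure is not specified in this thread, so the function names and shapes below are assumptions; this is only a minimal sketch of the merge-then-stack step using NumPy.

```python
import numpy as np


def merge_scan_halves(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Join the left and right segments of one scan side by side.

    Assumes (H, W) grayscale arrays; both halves must share the same height,
    otherwise the segments are misaligned and the merge is meaningless.
    """
    if left.shape[0] != right.shape[0]:
        raise ValueError("left/right segments must share the same height")
    return np.hstack([left, right])


def stack_scans(frames: list[np.ndarray]) -> np.ndarray:
    """Stack the merged frames vertically into a single tall image."""
    widths = {f.shape[1] for f in frames}
    if len(widths) != 1:
        raise ValueError("all frames must share the same width before stacking")
    return np.vstack(frames)
```

Getting either step wrong (swapped halves, mismatched heights, horizontal instead of vertical stacking) produces images whose geometry no longer corresponds to the physical root system, which is presumably why this was a disqualifying criterion.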

We sincerely appreciate your participation and hard work in this challenge.

Congratulations to everyone 🎉

Discussion · 24 answers
nymfree

Seems like the rules were changed after the challenge - adding subjective criteria. These should have been declared at the beginning of the challenge.

But what is done is done. Terrible competitions happen from time to time.

1 Apr 2025, 13:40
Upvotes 5
wizzard

This really hurts me 🫤. The rules were changed after the competition. The client could have used the top 30 solutions that seemed more appropriate, since he only wants image models. But changing the scores seems intolerable to me. The leaderboard should be the leaderboard. The private top 3 scores seem artificial to me, inserted just to put them ahead. I'm going to leave the platform for a while.

1 Apr 2025, 13:47
Upvotes 5
CodeJoe

Oh you don't have to. It happens from time to time. Please don't leave the platform. There are more competitions to come. I am sure they will go over the rules for the next competitions.

wizzard

Thank you @CodeJoe. I will get over it soon

nymfree

Inserting private scores of 1, 1.1 and 1.2 is just bad. Terrible judgment call there. You could have just announced prize winners without creating a fake PB.

offei_lad
University of Mines and Technology

Exactly, they could have given their money to whoever they thought deserved it, why taint the leaderboard with fake scores? We compete to build the best models and this isn't something the organisers can decide based on preference.

marching_learning
Nostalgic Mathematics

As I said, this is robbery. It is no longer science but taste. Had we known this at the beginning, we wouldn't have competed.

offei_lad
University of Mines and Technology

It's sad really. Didn't think this sort of thing could happen, all the same we'll continue to build the best models we can and hope for fair judgement 😔

CodeJoe

Yes Lad, don't stop. You did amazingly well. More competitions are there. Keep on building the best models you can.

offei_lad
University of Mines and Technology

Thanks man, see you at the top of the leaderboard in competitions to come 👊

CodeJoe

Game on Brudda 🙌 ! Let's do this.

The competition really felt very strange: the winner was not judged on the private score, but on a source-code assessment after the competition was over.

marching_learning
Nostalgic Mathematics

It is robbery again (see the Smart Energy Supply Scheduling for Green Telecom Challenge).

1 Apr 2025, 14:41
Upvotes 2

I don't really understand what the rule at the end of this competition is. My model encodes the images with timm, combines the embeddings with the start, end, genotype and stage values, and uses LightGBM for the final prediction. Does Zindi only want a solution that uses the images, without the other values (start, end, genotype, stage)? Or must it use a CNN? Or something else? Please explain so that we can anticipate problems related to the rules that may recur in future competitions. Thank you for your response.
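For reference, the hybrid pipeline described above can be sketched roughly as follows. The timm encoder is stubbed out with a hypothetical fixed random projection (`encode_image` is an assumed name, not the author's code); in the real pipeline it would be something like `timm.create_model(..., num_classes=0)`, and the concatenated features would be fed to a LightGBM regressor.

```python
import numpy as np


def encode_image(image: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in for a timm image encoder: maps an image to a fixed-length
    embedding. Here a seeded random projection fakes the embedding step."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((image.size, dim))
    return image.ravel() @ proj


def build_features(image: np.ndarray, start: float, end: float,
                   genotype_id: int, stage_id: int) -> np.ndarray:
    """Concatenate the image embedding with the tabular columns (start, end,
    genotype, stage); the result would go to a gradient-boosted regressor."""
    tabular = np.array([start, end, genotype_id, stage_id], dtype=float)
    return np.concatenate([encode_image(image), tabular])
```

The open question in the comment above is exactly whether the last four tabular features are allowed, or whether only the image embedding may drive the prediction.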

1 Apr 2025, 15:49
Upvotes 2
marching_learning
Nostalgic Mathematics

That being said, you were robbed @sys_ts__. You were in the initial top 3? To me, it is an image-based solution. I think they should create a downvote button.

No, I was in the initial top 4.

Koleshjr
Multimedia university of kenya

I second this point:

Please explain so that we can anticipate problems related to the rules that may recur in future competitions. thank you for your response.

Koleshjr
Multimedia university of kenya

Hey @AJoel, could the winning solutions be open-sourced so that we can learn from them? At least that would bring transparency to the issue below, which I think was the main judging point:

Correctness of Preprocessing: Image merging must be performed accurately, ensuring proper alignment of the left and right image segments and stacking the images vertically.

1 Apr 2025, 16:04
Upvotes 2
MICADEE
LAHASCOM

@zindi painful, funny and laughable at the same time....🤣🤣🤣🤣🤣 Zindiiiiiiiiiiii, not again..... Can @zindi respond to this simple question below from @sys_ts__:

Please explain so that we can anticipate problems related to the rules that may recur in future competitions. Thank you for your response.

If Zindi had selected a metric less sensitive to outliers for evaluating the small test sample, this couldn't have happened.
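To illustrate the point about outlier sensitivity: on a small test set, a squared-error metric such as RMSE is dominated by a single large residual far more than MAE is. A minimal comparison with synthetic values (chosen purely for illustration):

```python
import numpy as np


def rmse(y_true, y_pred) -> float:
    """Root mean squared error: squaring amplifies large residuals."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))


def mae(y_true, y_pred) -> float:
    """Mean absolute error: each residual contributes linearly."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(diff)))


y_true = np.array([1.0, 1.0, 1.0, 1.0, 10.0])  # one outlier target
y_pred = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # model misses only the outlier
# With only 5 samples, the single residual of 9 pushes RMSE to ~4.02
# while MAE stays at 1.8, so one outlier decides the ranking.
```

On a larger test set the same single outlier would be diluted, which is why metric choice matters more when the evaluation sample is small.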

2 Apr 2025, 07:06
Upvotes 1
marching_learning
Nostalgic Mathematics

Couldn't agree more. I sometimes don't understand Zindi's choice of metrics.

In my opinion, Zindi should have removed the Start, End, Genotype and Stage columns from the test data at the start of the competition (or perhaps they forgot to remove them), because it seems they reject the use of these columns for prediction and only allow the images as features. The presence of these columns in the test data causes confusion.

Amy_Bray
Zindi

Dear Zindians,

We would like to address a few of the concerns raised in this discussion thread, in the interest of transparency and constructive communication with the community. In addition, we outline some platform changes that we are planning to ensure better transparency in future. First let me say, we really appreciate all your messages and engagement, even if some of it is critical of the platform! That shows us that you all really care, and as always we will try our best to be open and honest with you.

Rules changes: We would like to clarify that we made no rules changes during or after this challenge. As always, the rules of Zindi challenges state that solutions must be useful to the client according to the terms and rules laid out in the challenge. Everyone whose ranking went down in this challenge was deemed to have submitted a solution that would not help the client solve the intended challenge. We encourage all participants to read the rules of every challenge carefully, and remember that we will always prioritise solutions that are of real use to clients rather than simply the highest-scoring submissions on the leaderboard. We always try to communicate these criteria as clearly as possible.

Reasoning for selecting the winners: Regarding the question from @sys_ts__ and others, winners were selected on the basis of usefulness to the partner organisation, as per this rule:

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

In this case, most submissions in the top 30 failed in this regard, in one of two ways:

  1. some models used plant number as a feature, which does not hold predictive value.
  2. many models pre-processed images incorrectly, leading to meaningless predictions of plant volume.

The above rule supersedes all others in all Zindi competitions, so we encourage you to always think carefully about the usefulness of your models in a real-world situation.

Manually-adjusted scoring on the leaderboard: Many of you have noticed that the top 3 ranks have been manually adjusted. This was done to ensure that the best solutions ranked above all of the disqualified solutions. This is a limitation of the platform - manually adjusting scores is currently the only way to adjust rankings. We acknowledge that this is a poor solution, and we are taking steps to address this - please see below.

Selection of error metric: The error metric used was specifically requested by the client. We agree that the metric chosen was not ideal for this challenge, and as always we will learn from this challenge when choosing error metrics in future.

Future changes to address these issues: Your concerns around the handling of this challenge leaderboard are valid and have been heard. To avoid such issues in future, we will be rolling out the following features soon:

  • We will create a visible change log for each challenge, to ensure transparency regarding any changes made during the challenge period
  • We will create a function to allow reranking of the final leaderboard, without changing private leaderboard scoring
  • We will create a way to indicate any manual rank changes made on the leaderboard, to ensure transparency regarding any changes made after the challenge period

As always, we appreciate your concerns and assure you all that we are always trying our best to make this platform the best it can be for all of you as well as for all of our partners. If you have any comments or questions, please do share them.

Happy hacking!

9 Apr 2025, 10:02
Upvotes 4

Thank you for your complete explanation Amy.