Dear Zindians,
Well done on competing in a particularly complex challenge!
Zindi remains committed to delivering valuable solutions to our clients and partners.
While we uphold high standards of usability and value, we also recognize the complexity
of this challenge and appreciate the dedication and effort each of you put into your
submissions.
Following our review of the CGIAR Root Volume Estimation Challenge, winners were
selected based on the following criteria:
We sincerely appreciate your participation and hard work in this challenge.
Congratulations to everyone 🎉
Seems like the rules were changed after the challenge - adding subjective criteria. These should have been declared at the beginning of the challenge.
But what is done is done. Terrible competitions happen from time to time.
This really hurts me 🫤. The rules were changed after the competition. The client could have simply picked whichever of the top 30 solutions seemed most appropriate, since he only wanted image models. But changing the scores seems intolerable to me. The leaderboard should be the leaderboard. The private top 3 scores seem artificial to me, inserted just to put them ahead. I'm going to leave the platform for a while.
Oh you don't have to. It happens from time to time. Please don't leave the platform. There are more competitions to come. I am sure they will go over the rules for the next competitions.
Thank you @CodeJoe. I will get over it soon
Inserting private scores of 1, 1.1 and 1.2 is just bad. Terrible judgment call there. You could have just announced prize winners without creating a fake PB.
Exactly, they could have given their money to whoever they thought deserved it, why taint the leaderboard with fake scores? We compete to build the best models and this isn't something the organisers can decide based on preference.
As I said, this is robbery. It is no longer science but taste. Had we known this at the beginning, we wouldn't have competed.
It's sad really. I didn't think this sort of thing could happen; all the same, we'll continue to build the best models we can and hope for fair judgement 😔
Yes Lad, don't stop. You did amazingly well. More competitions are there. Keep on building the best models you can.
Thanks man, see you at the top of the leaderboard in competitions to come 👊
Game on Brudda 🙌 ! Let's do this.
The competition really felt very strange: the winner was not judged on the private score, but on a source code assessment after the competition was over.
It is robbery again (see the Smart Energy Supply Scheduling for Green Telecom Challenge).
I don't really understand what the rule at the end of this competition means. My model uses an image encoder from timm, combines its output with the start, end, genotype and stage values, and uses LightGBM for the final prediction (a rough sketch is below). Does Zindi only want a solution that uses the images, without the other values (start, end, genotype, stage)? Must it use a CNN? Or something else? Please explain so that we can anticipate problems related to the rules that may recur in future competitions. Thank you for your response.
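For context, the pipeline I mean looks roughly like this. This is a minimal sketch only; the encoder name, column handling and training call are illustrative assumptions, not my exact code:

```python
import numpy as np
import timm
import torch
import lightgbm as lgb

# 1) Pretrained image encoder from timm (num_classes=0 returns pooled features).
#    "resnet18" is a placeholder; any timm backbone works the same way.
encoder = timm.create_model("resnet18", pretrained=True, num_classes=0)
encoder.eval()

def image_features(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, H, W) float tensor, already resized and normalised."""
    with torch.no_grad():
        return encoder(batch).cpu().numpy()  # shape (N, feature_dim)

# 2) Concatenate the image embeddings with the tabular columns
#    (start, end, genotype, stage), then fit LightGBM on the result.
def build_features(images: torch.Tensor, tabular: np.ndarray) -> np.ndarray:
    return np.hstack([image_features(images), tabular])

# Hypothetical usage; train_images, train_tabular and y_train are assumed here.
# X_train = build_features(train_images, train_tabular)
# model = lgb.LGBMRegressor(n_estimators=500)
# model.fit(X_train, y_train)
```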
That being said, you were robbed @sys_ts__. Were you in the initial top 3? To me, it is an image-based solution. I think they should create a downvote button.
No, I was in the initial top 4
I second this point:
Please explain so that we can anticipate problems related to the rules that may recur in future competitions. Thank you for your response.
Hey @AJoel could the winning solutions be open sourced so that we can learn from them? At least that would bring transparency to the issue below, which I think was the main judging point:
Correctness of Preprocessing: Image merging must be performed accurately, ensuring proper alignment of the left and right image segments and stacking the images vertically.
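For reference, here is a minimal sketch of what that criterion seems to describe, assuming Pillow and that each sample arrives as separate left and right segments; the exact alignment the judges expected is not public:

```python
from PIL import Image

def merge_left_right(left: Image.Image, right: Image.Image) -> Image.Image:
    """Place the left and right segments side by side on one canvas."""
    height = max(left.height, right.height)
    merged = Image.new("RGB", (left.width + right.width, height))
    merged.paste(left, (0, 0))
    merged.paste(right, (left.width, 0))
    return merged

def stack_vertically(images: list[Image.Image]) -> Image.Image:
    """Stack the merged images top to bottom into a single image."""
    width = max(img.width for img in images)
    stacked = Image.new("RGB", (width, sum(img.height for img in images)))
    y = 0
    for img in images:
        stacked.paste(img, (0, y))
        y += img.height
    return stacked
```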
@zindi painful, funny and laughable at the same time... 🤣🤣🤣🤣🤣 Zindiiiiiiiiiiii, not again... Can @zindi respond to this simple question below, coming from @sys_ts__:
Please explain so that we can anticipate problems related to the rules that may recur in future competitions. Thank you for your response.
If Zindi had selected a metric less sensitive to outliers for evaluating the small test sample, this couldn't have happened.
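To illustrate with made-up numbers: on a small test set, a single badly-predicted outlier dominates an outlier-sensitive metric like RMSE far more than it does MAE:

```python
import numpy as np

y_true = np.array([10.0, 12.0, 11.0, 9.0, 50.0])  # one outlier target
y_pred = np.array([10.0, 12.0, 11.0, 9.0, 15.0])  # model misses the outlier

mae = np.mean(np.abs(y_true - y_pred))            # 7.00
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))   # ~15.65, driven by one point

print(f"MAE  = {mae:.2f}")
print(f"RMSE = {rmse:.2f}")
```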
Couldn't agree more. I sometimes don't understand Zindi's choice of metrics.
In my opinion, Zindi should have removed the Start, End, Genotype and Stage columns from the test data at the start of the competition (or they forgot to remove them), because it seems they rejected the use of these columns for prediction and only allowed images as features. The presence of these columns in the test data caused confusion.
Dear Zindians,
We would like to address a few of the concerns raised in this discussion thread, in the interest of transparency and constructive communication with the community. In addition, we outline some platform changes that we are planning to ensure better transparency in future. First let me say, we really appreciate all your messages and engagement, even if some of it is critical of the platform! That shows us that you all really care, and as always we will try our best to be open and honest with you.
Rule changes: We would like to clarify that we made no rule changes during or after this challenge. As always, the rules of Zindi challenges state that solutions must be useful to the client according to the terms and rules laid out in the challenge. Everyone whose ranking went down in this challenge was deemed to have submitted a solution that would not help the client solve the intended challenge. We encourage all participants to read the rules of every challenge carefully, and remember that we will always prioritise solutions that are of real use to clients rather than simply the highest-scoring submissions on the leaderboard. We always try to communicate these criteria as clearly as possible.
Reasoning for selecting the winners: Regarding the question of @sys_ts__ and others, winners were selected on the basis of usefulness to the partner organisation, as per this rule:
In this case, most submissions in the top 30 failed in this regard, in one of two ways:
The above rule supersedes all others in all Zindi competitions, so we encourage you to always think carefully about the usefulness of your models in a real-world situation.
Manually-adjusted scoring on the leaderboard: Many of you have noticed that the top 3 ranks have been manually adjusted. This was done to ensure that the best solutions ranked above all of the disqualified solutions. This is a limitation of the platform - manually adjusting scores is currently the only way to adjust rankings. We acknowledge that this is a poor solution, and we are taking steps to address this - please see below.
Selection of error metric: The error metric used was specifically requested by the client. We agree that the metric chosen was not ideal for this challenge, and as always we will learn from this challenge when choosing error metrics in future.
Future changes to address these issues: Your concerns around the handling of this challenge leaderboard are valid and have been heard. To avoid such issues in future, we will be rolling out the following features soon:
As always, we appreciate your concerns and assure you all that we are always trying our best to make this platform the best it can be for all of you as well as for all of our partners. If you have any comments or questions, please do share them.
Happy hacking!
Thank you for your complete explanation, Amy.