No, what's your rank?
28
Must mean there's gonna be a shake up after the shake up😂
:-P
Same here, I also received the mail and I'm not in the top 10 (22).
What's your MAE CV score?
Yes I received the email as well.
The way the code review is handled really hurts 🤦. We chose the submission that was supposed to generalize well. Since people outside the top 10 are being contacted, it seems our solutions are disqualified. If there is fairness, Zindi should consider the best models, including image-based ones. And they should have said it earlier. RMSE was a bad choice; with MAE, I think strong image models would have been rewarded. It's like the rules of the game changed after the game.
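The RMSE-vs-MAE point can be illustrated with a tiny sketch (the numbers are made up, not from this competition): a model with uniformly mediocre errors can beat a usually-sharper model under RMSE, because squaring lets one large miss dominate, while MAE ranks them the other way.

```python
import numpy as np

# Toy error vectors (hypothetical): a "steady" model that is always
# off by 4, vs a "sharp" model that is usually off by only 0.5 but
# fails badly on one sample - the failure mode image models can have.
steady = np.full(6, 4.0)
sharp = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 18.0])

def rmse(err):
    # Root mean squared error: squaring amplifies large errors.
    return float(np.sqrt(np.mean(err ** 2)))

def mae(err):
    # Mean absolute error: every unit of error counts equally.
    return float(np.mean(np.abs(err)))

# MAE rewards the sharp model; RMSE flips the ranking.
print(mae(sharp), mae(steady))    # ~3.42 vs 4.0  -> sharp wins
print(rmse(sharp), rmse(steady))  # ~7.36 vs 4.0  -> steady wins
```

So which model "generalizes well" genuinely depends on the metric chosen, which is the crux of the complaint above.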
It would be terrible if they didn't consider the actual private board results at the end. It's a competition based on a metric, however bad the metric might be, and the winner should be selected solely on that metric, regardless of what model was used.
I hope that Zindi is not extending the number of invitations in the hope of having a "computer vision" solution.
On this, I think I support @wizzard that @zindi should also ask for an image-based approach from the top 10, even though I used an image-based approach in the solution I sent to Zindi. And again, @wizzard, note that not all top 10 solutions are purely tabular, so I disagree: it's not possible for Zindi to disqualify the whole top 10. My approach is "image processing (e.g. segmentation using thresholding, etc.) + CatBoost modelling". However, I have another approach that is strictly computer vision all the way through, using "Vit_base_patch16_384", with another reasonable score on the LB as well.
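For readers unfamiliar with that kind of hybrid pipeline, here is a minimal sketch of what "segmentation by thresholding, then feed scalar features to a tabular model" can look like. This is a hypothetical illustration, not the poster's actual code; the threshold choice and feature names are assumptions (a real pipeline might use Otsu's method and many more features before CatBoost).

```python
import numpy as np

def threshold_features(img, thresh=None):
    """Summarize an image as a few scalars via crude threshold
    segmentation. The resulting dict is the kind of tabular row
    a gradient-boosting model (e.g. CatBoost) could train on."""
    if thresh is None:
        thresh = img.mean()  # naive global threshold (assumption)
    mask = img > thresh      # "segmentation" by thresholding
    area_frac = mask.mean()  # fraction of foreground pixels
    fg_mean = float(img[mask].mean()) if mask.any() else 0.0
    bg_mean = float(img[~mask].mean()) if (~mask).any() else 0.0
    return {
        "area_frac": float(area_frac),
        "fg_mean": fg_mean,
        "contrast": fg_mean - bg_mean,
    }

# Example on a synthetic 8x8 "image" with one bright 3x3 patch.
img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0
feats = threshold_features(img)
print(feats)  # area_frac = 9/64, fg_mean = 1.0, contrast = 1.0
```

Rows of such features, one per training image, would then replace raw pixels as the model's input.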
By the looks of it that is exactly what is happening
My second pick was also a computer vision model which comfortably scores in the top 10, but that's not what was submitted. I think I understand why Zindi cannot ask those of us who submitted tabular solutions for our computer vision models: some might create new models and tweak them against the private leaderboard before submission.
Yeah. every other participant might have a computer vision model they did not select. That's a whole other can of worms.
Especially me. I have the best private LB sub (1.18) over all teams, which is an image model, but I didn't select it. I know it is the best because when the competition had just closed, the LB was ranked by best private score, selected or not, and that sub ranked me top 1. I was quite confused then :P
With that said, I fully share your frustration @wizzard @offei_lad
And I think you totally deserve the prize. The game is a game and it is done. You won fair and square.
Yeah I saw that. I guess there's going to be a round 2 :)
I would love to learn from this image solution.
A video maybe?
Yes, @snow, it would be great if you could do a video on the image solution. 🔥 After the review.
@Koleshjr @CodeJoe Thank you for the support! I probably will make one sometime but please be patient. There are quite a few interesting on-going competitions right now :)
No stress, take your time.
We will wait. Feel free.
I got one too. I'm 11th.
Same here. Also got one
Hi @Shapu, this is not a bug. I sent an email to everyone at position 11-30 on the leaderboard.
@Ajoel Please, was this done because the top 10 didn't qualify? Or it is to see whether there are innovative solutions in the top 30 and the top 10 still qualify?
Hi @CodeJoe,
We are still in the code review process. The fact that a new submission email went out does not imply that the top 10 didn't qualify. The idea is simply to review a few more solutions: first, to get a better idea of the approaches taken, and also to review for usability.
Great. Thank you for the clarification.