Thank you @tomy4reel I have also pointed this out in my updated discussion.
https://zindi.africa/competitions/digital-green-crop-yield-estimate-challenge/discussions/19700
This ID was one of my potential outliers. It seems that the outliers, which we already knew existed, were actually not counted towards the final score. It's a strange decision by Zindi.
Because of the metric, we're asked to build a model that mainly predicts the outliers. Then they proceed to remove them from the test data :)
haha
Thank you @tomy4reel for pointing out this issue!
How do you detect outliers in the test sets?
Is it something like: the IDs where different models give very different predictions are suspicious, and then you spend a submission changing that value to see if the score changes?
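A minimal sketch of that disagreement idea, assuming you have predictions from a few models on the same test IDs (the model names, values, and the 0.5 threshold here are all made up for illustration): flag IDs where the relative spread across models is large.

```python
import pandas as pd

# Hypothetical predictions from several models on the same test IDs.
preds = pd.DataFrame({
    "ID": [f"id_{i}" for i in range(5)],
    "model_a": [100.0, 210.0, 95.0, 4000.0, 130.0],
    "model_b": [105.0, 205.0, 90.0, 900.0, 125.0],
    "model_c": [98.0, 215.0, 97.0, 2500.0, 128.0],
})

model_cols = ["model_a", "model_b", "model_c"]
# Relative spread across models; a high value flags a suspicious ID.
spread = preds[model_cols].std(axis=1) / preds[model_cols].mean(axis=1)
preds["suspicious"] = spread > 0.5  # threshold is arbitrary in this sketch

print(preds.loc[preds["suspicious"], "ID"].tolist())  # ['id_3']
```

The flagged IDs are then candidates for the probing step described in the thread: perturb one of them in a submission and watch the leaderboard score.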
In simple terms: LB probing. That's easy for the public test set; the hard part is the private test set. That requires skill, and a lot of hoping you made the right decisions. But a guy like @kamelyamani is good at it, and he has promised to share how he did it.
@db @Koleshjr I’ve just shared my solution here: https://zindi.africa/competitions/digital-green-crop-yield-estimate-challenge/discussions/19719