No luck was in this big man. You worked super hard and really deserve this win! I'm so happy for you🎉🎉🙌🙌. Congratulations once again🔥🙌🎉. @marching_learning
@CodeJoe. I say lucky because despite the efforts, we don't have to overrate the leaderbord. Actually (under the confirmation of other top 10 teams), our models are performing poorly. Because the simple average class submission scores around 0.742xxx (5th on private LB), so that 0.70xxx is not such a big score. This causes frustration to me. Because the other datasets we have may not line up the competition data. We should have had a minimal training dataset. It puts up in potential scenario where average model and good data could have had or has stolen the show. I will post my write up. To me using Big GeoFM for a simple 0.05xxx over a constant model is a bit harsh to take.
The lack of input information about the test set makes this competition less meaningful. With a logloss greater than 0.7, the competition model would need significant improvement before it could be considered usable. I suspect that some of the top-ranked teams may have probed the leaderboard using constant prediction. For example, a submission with values such as cocoa = 0.75, oil = 0.125, rubber = 0.125 could achieve a leaderboard score of 0.73. Combining amini FM features, and some postprocessing tricks might help achieve LB score~0.70, but such approaches would violate the rules and lead to disqualification from the final leaderboard.
Can't tell. Let's just wait😅
Well, there was no shake up
Finally, @marching_learning won!!!! Congratulations Big man🎉🎉🎉🎉🎉🎉
Congrats @marching_learning!
Congrats @machine_learning,
Thank you guys, I was very lucky. I almost selected 0.74xxx sub
Congratulations @machine_learning
No luck was in this big man. You worked super hard and really deserve this win! I'm so happy for you🎉🎉🙌🙌. Congratulations once again🔥🙌🎉. @marching_learning
@CodeJoe. I say lucky because despite the efforts, we don't have to overrate the leaderbord. Actually (under the confirmation of other top 10 teams), our models are performing poorly. Because the simple average class submission scores around 0.742xxx (5th on private LB), so that 0.70xxx is not such a big score. This causes frustration to me. Because the other datasets we have may not line up the competition data. We should have had a minimal training dataset. It puts up in potential scenario where average model and good data could have had or has stolen the show. I will post my write up. To me using Big GeoFM for a simple 0.05xxx over a constant model is a bit harsh to take.
Yes I actually saw that and didn't place too much effort on this competition. That doesn't mean your effort shouldn't be recognized.
Thank you in advance for the write up😅.
Congratulations @machine_learning
The lack of input information about the test set makes this competition less meaningful. With a logloss greater than 0.7, the competition model would need significant improvement before it could be considered usable. I suspect that some of the top-ranked teams may have probed the leaderboard using constant prediction. For example, a submission with values such as cocoa = 0.75, oil = 0.125, rubber = 0.125 could achieve a leaderboard score of 0.73. Combining amini FM features, and some postprocessing tricks might help achieve LB score~0.70, but such approaches would violate the rules and lead to disqualification from the final leaderboard.
I probed the LB to get those weights. yes it garanties 0.73xx score with just 3 points.