
Amini GeoFM Decoding the Field Challenge

Helping Africa
$8,500 USD
Completed (8 months ago)
Classification
798 joined
153 active
Start: Jun 10, 25
Close: Jul 20, 25
Reveal: Jul 21, 25
Belal_Emad
Cairo University
Massive shake up?
20 Jul 2025, 21:26 · 12

From my experience in the competition, and given the absence of training data, I expect there will be a huge shake-up. What's your opinion?

Discussion · 12 answers
CodeJoe

Can't tell. Let's just wait😅

20 Jul 2025, 23:05
Upvotes 1
Ebiendele
Federal University of Technology, Akure

Well, there was no shake-up.

21 Jul 2025, 00:22
Upvotes 0
CodeJoe

Finally, @marching_learning won!!!! Congratulations Big man🎉🎉🎉🎉🎉🎉

21 Jul 2025, 00:24
Upvotes 4
ML_Wizzard
Nasarawa State University

Congrats @marching_learning,

marching_learning
Nostalgic Mathematics

Thank you guys, I was very lucky. I almost selected the 0.74xxx sub.


CodeJoe

There was no luck in this, big man. You worked super hard and really deserve this win! I'm so happy for you🎉🎉🙌🙌. Congratulations once again🔥🙌🎉. @marching_learning

marching_learning
Nostalgic Mathematics

@CodeJoe. I say lucky because, despite the effort, we shouldn't overrate the leaderboard. In fact (confirmed by other top-10 teams), our models are performing poorly: a simple average-class submission scores around 0.742xxx (5th on the private LB), so 0.70xxx is not such a big score. That frustrates me, because the other datasets we have may not line up with the competition data. We should have had at least a minimal training dataset. That sets up a scenario where an average model with good data could have stolen the show. I will post my write-up. To me, a big GeoFM gaining only about 0.05xxx over a constant model is a bit hard to take.

CodeJoe

Yes, I actually saw that and didn't put too much effort into this competition. That doesn't mean your effort shouldn't be recognized.

Thank you in advance for the write up😅.

3B

Congratulations @marching_learning

The lack of information about the test set makes this competition less meaningful. With a logloss greater than 0.7, the competition model would need significant improvement before it could be considered usable. I suspect that some of the top-ranked teams may have probed the leaderboard using constant predictions. For example, a submission with values such as cocoa = 0.75, oil = 0.125, rubber = 0.125 could achieve a leaderboard score of 0.73. Combining Amini FM features with some postprocessing tricks might help achieve an LB score of ~0.70, but such approaches would violate the rules and lead to disqualification from the final leaderboard.
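The constant-submission claim above is easy to sanity-check. If every row gets the same probability vector, the logloss reduces to a weighted sum over the (unknown) test-set class proportions; the proportions below are an assumption chosen to match the thread's numbers, not published competition data:

```python
import numpy as np

def constant_logloss(priors, pred):
    """Logloss when the same probability vector `pred` is submitted for
    every row, and the test set has class proportions `priors`."""
    priors = np.asarray(priors, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(-np.sum(priors * np.log(pred)))

pred = [0.75, 0.125, 0.125]            # the constant submission from the post
assumed_priors = [0.75, 0.125, 0.125]  # ASSUMED test-set class mix (illustrative)

print(round(constant_logloss(assumed_priors, pred), 3))  # ≈ 0.736
```

When the submitted vector happens to equal the true class mix, this value is just the entropy of that distribution, which is consistent with the ~0.73 score mentioned in the post.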

User avatar
marching_learning
Nostalgic Mathematics

I probed the LB to get those weights. Yes, it guarantees a 0.73xx score with just 3 points.
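To illustrate what "probing with 3 points" could mean mechanically: each constant submission p yields one linear equation in the hidden class proportions π, since score = -Σ_c π_c · log(p_c). With three classes, a few probes plus the constraint Σπ = 1 pin down π. The probe vectors and the "hidden" proportions below are invented for the sketch; they are not the actual submissions used in the competition:

```python
import numpy as np

def probe_system(preds, scores):
    """Recover class proportions from the logloss scores of constant
    submissions: each probe p contributes the equation
    -sum(pi * log(p)) = score, plus the constraint sum(pi) = 1."""
    A = [-np.log(np.asarray(p, dtype=float)) for p in preds]
    A.append(np.ones(len(preds[0])))   # proportions must sum to 1
    b = list(scores) + [1.0]
    pi, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return pi

# Simulated leaderboard: hidden proportions the probes will recover.
true_pi = np.array([0.75, 0.125, 0.125])
probes = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]
scores = [float(-np.sum(true_pi * np.log(p))) for p in probes]

print(np.round(probe_system(probes, scores), 3))  # recovers the hidden proportions
```

This is exactly why organizers usually forbid leaderboard probing: a handful of constant submissions leaks the test-set class mix without any modeling at all.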