I am curious which model works best here. I can see that many people on the leaderboard are getting similar scores. Another concern is that the MAE I am getting on validation is very low compared to the MAE I am getting on the test set. I would appreciate your feedback.
Those similar scores are achieved by submitting all zeros. There is something wrong with the scorer, and possibly with the test set too. The online MAE should not be so far off the local MAE. There is also next to no correlation between improvements on local validation and the public leaderboard.
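One quick way to see how suspicious this is: score an all-zeros "submission" against your own validation targets and compare that number with what all zeros gets on the public leaderboard. A minimal sketch (the validation values below are placeholders, substitute your own):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Placeholder validation yields; replace with your actual hold-out targets.
y_val = np.array([0.8, 1.2, 0.5, 2.0])
zeros = np.zeros_like(y_val)

# MAE of an all-zeros prediction is simply the mean absolute target value.
print(mean_absolute_error(y_val, zeros))  # -> 1.125 for these placeholders
```

If your local all-zeros MAE is far from the leaderboard all-zeros MAE, the validation split and the test set are not drawn from the same distribution, and local improvements won't transfer.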
I see the same problem: submitting different results from a model that improves on the local validation set doesn't even beat the all-zeros submission. You can get results below 0.7 by training a KNN on the yields, so this doesn't look very good to me.
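For anyone who wants to reproduce that kind of local baseline, here is a rough sketch of a cross-validated KNN regressor. The features and yield values are random stand-ins, so the printed MAE is not the 0.7 figure above; plug in the competition data to get a comparable number:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # stand-in for your features
y = rng.normal(loc=1.0, scale=0.5, size=200)   # stand-in for the yield column

knn = KNeighborsRegressor(n_neighbors=5)
# cross_val_score returns negated MAE, so flip the sign back.
mae = -cross_val_score(knn, X, y,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"cross-validated MAE: {mae:.3f}")
```

The number of neighbours (5 here) is just a default; it would need tuning on the real data.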
We are pitted against an RNG ... it makes the challenge more like the real world.
I have tried a variety of models. The local MAE is low, but when I make submissions, the MAE on the leaderboard is high. I was wondering whether there is leakage in my validation. Should I trust my local results or the public leaderboard?
We're not saying.
This is an impossibly difficult problem that you must solve with your own very limited resources and with no support or guidance. The problem keeps on changing and the data is dirty and volatile. The prize is relatively low in comparison and if you look at the leaderboard it will go to Europe anyhow. If you do manage to solve it then the IP goes to the organisers.
This is, I suppose, to simulate conditions in Africa. For our trouble we gain experience with CNNs.
But at least we have faith here on the continent, and that, together with love and hope, overcomes all problems.
FWIW I trust my local results.
My best-scoring models are more or less random, so I've picked the two models I think will do well based on local results.
I will also trust my local results. Thank you for that. I thought I was the only one experiencing this problem.