Dear Zindians,
Our team took your concerns into consideration following the reveal of the private leaderboard, and after reviewing the scoring process, we found that an outdated version of the error metric was used to calculate the private leaderboard scores. Please find the leaderboard now updated with the correct scores and rankings.
Please note that the only issue was the version of the scoring script used on the private leaderboard. All scores were obtained after a thorough review of every submission made, and any submission that did not make it to the final leaderboard was excluded because its strategy was infeasible in the long term (i.e., over the private leaderboard period).
We apologise for any confusion that may have occurred and appreciate your patience with the irregularities around this competition. We are looking forward to reviewing winning solutions in the coming days.
The Zindi Team
WHAT??? THIS IS REALLY UNFAIR. So it means that all this time we were being scored with the wrong evaluation function, only for it to change after the competition. That is why we asked about the scoring script, and it was changed without informing us.
Yesterday, the leaderboard was updated twice, with very different outcomes. To me, fairness means the competition is prolonged for maybe two or three days, with everyone knowing the exact scoring rule. But.......
Honestly, I am sad to see this. We have been competing for three months, only for it to end like this. I FEEL THAT MY TIME WAS STOLEN. I WILL TAKE A BREAK. I am considering playing elsewhere.
This is completely unfair. I am confused; was there a different scoring script given? If everyone were optimizing a different scoring function, how could there be a good solution for the problem? I mean, do the models add any value to the clients?
I really don't understand....
You mean you used another version of the error metric after the competition finished? And we still don't know what the error metric is.
It is not fair; the Zindi team has to extend the competition by 3 days.
- Bugged leaderboard for one to two weeks at the start of the contest.
- Unclear rules, faulty metrics, uncommunicative organizers.
- Deleted all my passed contributions from before September 25.
- Changed the rules 3 days before the end. No notice, no postponement.
- Changed the validator function 3 days after the end. No remake.
Great site, this was my first contest and I can't wait for the next one!
I can still submit now and get the same score as before. So where is this new error metric?
Yeah, I retried. I get the same score in both public and private. It is strange. @Neo_Intelligence, did you get the same score in both public and private?
Yes
What a shit show. Should we believe that now everything is fixed?
In another contest (Microsoft Learn Location Mention Recognition Challenge) there's a data leak that corrupted the leaderboard, so it's completely useless now.
Being on the receiving end of such bugs is hugely frustrating.
Yesterday I was ranked #1. I spent the whole night writing the documents. Today is my birthday, and I received this 'big' gift. And I still don't know if I should submit my code and documents.
Happy birthday!
Wow, yeah that's so bad. Really sorry for you.
They have played with our time; last year I faced the same thing. I don't think they have fixed their error metric; I think they have changed the data distribution.
I really feel for you. Hbday @Neo_Intelligence
Which competition, @Yisakberhanu?
Probably this one: https://zindi.africa/competitions/aiml-for-5g-energy-consumption-modelling/discussions/18990
Happy birthday, Neo! You are the One.
Thank you! @EricSims
Thank you! @marching_learning
Thank you! @tfriedel
Can the organizers just reveal the data used? That is, the consumption and solar data used for scoring? Then we could at least analyze and understand why some solutions were infeasible. I cannot understand why this is not the norm: the competition has ended, so allow us to understand what mistakes we may have made so that we can do better in the future. @ZINDI
I think they start scoring the private leaderboard after the 4th day, and they sum up the public score and the private score separately. That is, if your strategy sets diesel to false when the battery is fully charged, it will not work if the battery charge starts as new for the private data, which runs from the 5th to the 7th day. The majority of our submissions were calculated as if the entire week runs continuously, so they do not work.
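Here is a minimal sketch of what I mean, assuming an hourly charge plan; the battery capacity, the day split, and all names below are my own assumptions for illustration, not Zindi's actual scoring script:

```python
import numpy as np

HOURS_PER_DAY = 24
CAPACITY = 100.0  # hypothetical battery capacity

def simulate(charge_plan, initial_charge):
    """Track battery state over hourly charge/discharge deltas,
    clipping to [0, CAPACITY]; returns the state after each hour."""
    state = initial_charge
    states = []
    for delta in charge_plan:
        state = min(max(state + delta, 0.0), CAPACITY)
        states.append(state)
    return np.array(states)

# A week-long plan tuned assuming the battery state carries over continuously.
rng = np.random.default_rng(0)
plan = rng.uniform(-5, 5, size=7 * HOURS_PER_DAY)
full_week = simulate(plan, initial_charge=50.0)

# Public period: days 1-4. Private period: days 5-7, but scored with the
# battery state reset "as new" instead of carried over from day 4.
split = 4 * HOURS_PER_DAY
public_states = simulate(plan[:split], initial_charge=50.0)   # public window
private_states = simulate(plan[split:], initial_charge=50.0)  # reset!

# With the private window re-initialised, the states diverge from the
# continuous simulation, so a rule like "diesel off when battery is full"
# fires at different hours than the submission assumed.
print("max divergence on days 5-7:",
      np.abs(full_week[split:] - private_states).max())
```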
I have made a scoring script that subdivides submissions into overlapping sequences of 30% of the data, from the beginning to the end. This way, it gives me intervals of my score. All my infeasible submissions scored under the feasible ones.
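Roughly like this; since we still don't know the real error metric, the MAE used here and all the names are my assumptions, not the actual scoring function:

```python
import numpy as np

def windowed_scores(pred, truth, frac=0.30, step_frac=0.10):
    """Score overlapping windows covering `frac` of the series, sliding by
    `step_frac`, to get an interval of scores instead of a single number."""
    n = len(pred)
    win = max(1, int(n * frac))
    step = max(1, int(n * step_frac))
    scores = []
    for start in range(0, n - win + 1, step):
        p = pred[start:start + win]
        t = truth[start:start + win]
        scores.append(np.mean(np.abs(p - t)))  # assumed metric: MAE
    return np.array(scores)

# Synthetic stand-in for one week of hourly values and a noisy submission.
rng = np.random.default_rng(1)
truth = rng.uniform(0, 10, size=7 * 24)
pred = truth + rng.normal(0, 0.5, size=truth.shape)

scores = windowed_scores(pred, truth)
print(f"score interval: [{scores.min():.3f}, {scores.max():.3f}]")
```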
Please, can you share the test data you used? I don't care about the score and leaderboard anymore. I just want to see the difference between the test data and the train data.
@nicolapiovesan, @ZINDI, @meganomaly
If you cannot explain how you scored the solutions, then just provide the test data, as has been asked a few times already. We will find out what was going on and maybe point out more issues.
You have already ruined the competition for many, and communication was lacking. Please, in the end, show respect to the people who put time and effort into it, and be transparent.
Dear Zindians,
Thank you for your passionate engagement in the competition and for sharing your concerns. We understand that the recent leaderboard update, due to an issue with the private leaderboard scoring, has caused frustration, and we sincerely apologise for the confusion.
To clarify, the error affected the private leaderboard, where an outdated version of the error metric was used. This has since been corrected, and the final rankings now reflect accurate scores based on the function applied over the full evaluation period of 7 days.
We understand that many of you have spent months refining your solutions, and it is understandably disheartening to face changes at the competition’s conclusion. However, please be assured that the scoring process has been reviewed in detail to ensure fairness for all competitors. Every submission was evaluated, and no further changes will be made to the leaderboard.
To address some of the specific points raised:
We apologise once again for the inconvenience and appreciate your understanding as we work to uphold a fair and rigorous competition process. Your feedback is incredibly valuable to us, and we are committed to improving our communication and transparency moving forward.
Thank you for your continued participation, and we look forward to celebrating your hard work as we review the winning solutions.
Best regards, The Zindi Team
Well, I'm going back to Kaggling.
Thanks for the update. What is the reason that the test data cannot be shared? The competition has ended, and there will be no further changes to the leaderboard; I am fine with that. I would just like to understand which of my assumptions were wrong.
Please share the test data.
Why so many blunders of late, @ZINDI? Even the validation page leaderboard is inverted and ranks the highest scores as best.
What does a public score of 0 and a private score of 0 mean? Did the metric fail again, @ZINDI?
Some of my submissions show 0 - 0.