Hi Zindians,
A participant flagged that a subset of the “exit rating” videos in the dataset had been incorrectly labelled. The mislabeled samples could introduce noise and unfairness in the evaluation.
To keep the challenge fair and consistent for everyone, we’ve made the following updates:
All previous submissions will be automatically rescored using the corrected reference set. No action is required on your part - but you’re welcome to resubmit using the updated sample submission. The rescore is currently running and I’ll update this post when all past submissions have been rescored.
These changes ensure the evaluation remains fair and consistent for all participants.
We appreciate your understanding and patience - and we hope the cleaner evaluation will help you build even better models.
If you have any questions, feel free to drop them in the chat.
Happy modelling!
Thanks @meganomaly for the update. This really helps — the exit ratings were quite puzzling, and even visually the videos looked mostly free-flowing. It was hard to anchor them to any clear factors like speed, vehicle count, or timing since the patterns felt almost random compared to the entry clips. Removing them should definitely make the evaluation fairer and more consistent. Looking forward to building improved models with the cleaner setup!
Thanks @meganomaly for making the change. Indeed, the "exit" videos still come out as free-flowing, despite the labels indicating otherwise.
This is awesome, thanks @meganomaly
Updated!