Hi everyone, I'd like to raise a concern about the evaluation process. Selecting only 10 or 20 transcripts out of over 200 submissions, without reviewing the actual apps from the remaining participants, feels fundamentally flawed. In an app-building competition, the primary artifact should be the app itself: its functionality, usability, implementation quality, and innovation. By filtering strictly through transcripts first, there is a risk that strong technical solutions are never seen simply because their documentation was not among the initial 10 selected, which does not necessarily reflect the true quality or impact of the app.

I understand the logistical constraints, but a lightweight preliminary app review before narrowing to 10, even automated checks, demos, or short walkthrough validations, would create a fairer assessment process.

Could the organizers clarify:

- What objective criteria are used to shortlist transcripts?
- Whether any app-level validation happens before the top-10 selection?
- How they are ensuring strong builds are not excluded early?

Many of us invested serious effort into building functional solutions, and it would be reassuring to understand how that work is being evaluated. Thank you.
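Edit: to make the "automated checks" suggestion concrete, here is a minimal sketch of the kind of smoke test I have in mind. The submission list, health endpoint, and timeout are purely illustrative assumptions on my part, not anything the organizers have specified:

```python
# Hypothetical smoke test: verify each submitted app actually responds
# before any transcript-based shortlisting. The submissions list and
# /health endpoint are made-up examples, not the competition's spec.
import urllib.request
import urllib.error

# Placeholder data; in practice this would come from the submission form.
submissions = [
    ("team_a", "https://team-a.example.com/health"),
    ("team_b", "https://team-b.example.com/health"),
]

def app_responds(url: str, timeout: float = 10.0) -> bool:
    """Return True if the app answers with HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

for team, url in submissions:
    print(f"{team}: {'OK' if app_responds(url) else 'FAILED'}")
```

Even a pass/fail column like this, attached to every submission before shortlisting, would catch completely broken builds without requiring a full manual review of 200+ apps.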
They should definitely request demo videos of the apps from us to make this competition fair. Only 54 individuals made submissions, and if the app is really going to be used in Ghana, the organizers should pick only the best implementations covering all functional and non-functional requirements. Filter the best ones and investigate code usability, solution viability, and, most importantly, each model's accuracy on its test set.
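For the accuracy part, something as simple as comparing each team's predictions against a private test set would do. A rough sketch, where the file names and column layout are entirely made up for illustration:

```python
# Hypothetical held-out accuracy check: score one team's predictions
# against labels only the organizers hold. Paths and column names
# ("id", "label") are illustrative assumptions, not the actual format.
import csv

def load_labels(path: str) -> dict[str, str]:
    """Map each sample id to its label from a two-column CSV."""
    with open(path, newline="") as f:
        return {row["id"]: row["label"] for row in csv.DictReader(f)}

truth = load_labels("private_test_labels.csv")       # organizers only
predictions = load_labels("team_a_predictions.csv")  # one file per team

# Score only the ids that appear in the private test set.
correct = sum(1 for i, label in truth.items() if predictions.get(i) == label)
print(f"team_a accuracy on private test set: {correct / len(truth):.3f}")
```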
I have not competed in any Zindi competition before, but I infer that these are public leaderboards and that your transcripts have yet to be evaluated on the private leaderboard (I have no idea what that would look like), so the top 10 might change. Good luck to you! Anyway, this really is unfair for an app challenge.