
Unido AfricaRice App Builder Challenge

Helping Ghana · $5,000 USD · Reveal coming soon!
App Development · Model Deployment
281 joined · 59 active
Starts: Feb 18, 2026
Closes: Mar 01, 2026
Reveal: Mar 13, 2026
snazon
Clarification: Is the Top 10 based on the App or the Markdown?
25 Feb 2026, 18:10 · 2

Hi Zindi Team,

I’m seeking a bit of clarity on the evaluation process for the App Builder Challenge.

The brief outlines several sophisticated technical requirements—such as full offline stability, real-time image validation, and on-device model inference. It is quite a feat if the current automated script is able to verify these nuanced UI/UX and hardware performance metrics simply by parsing a Markdown file.

This leads to a bit of a logical loop:

  • The rules state that the actual APK and code are only requested from the Top 10 on the leaderboard.
  • However, if the leaderboard is determined by a text-parser grading a .md file, then the "Top 10" will represent the best writers, not necessarily the best developers.

Essentially, a developer could build a flawless, field-ready offline tool but never have their APK reviewed because they didn't "optimize" their Markdown text for the automated script.

Could you clarify if manual reviews, demo videos, or actual app performance carry any weight in reaching the Top 10? It would be helpful to know if we should be focusing our energy on building the requested app or simply polishing our text file.

Best regards,

snazon

Discussion · 2 answers
meganomaly
Zindi

Hi @snazon

Thank you for raising this - it’s an important clarification.

First, to be very clear: This challenge is not a writing competition. It is an application development challenge.

The transcript is not being evaluated for literary quality - it is being evaluated as structured technical evidence of a working system.

Why Transcripts Are Used

The transcript serves as a structured technical walkthrough, not a creative essay.

The rubric evaluates whether:

  • The required features are demonstrated
  • The workflow is logically consistent
  • Offline behaviour is explicitly described
  • On-device inference is clearly shown
  • Image validation flow is demonstrated
  • Edge cases and constraints are handled
  • The system behaviour is technically plausible

The rubric does not score writing style or polish. Submissions that are vague, unrealistic, inconsistent, or technically infeasible are penalized - even if they are well written.
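
To make "structured technical evidence" concrete, here is a minimal sketch of what a rubric like the one above could look like as a checklist rather than a style score. This is purely illustrative - the class, point values, and mandatory flags are assumptions for the sketch, not Zindi's actual evaluation code.

```python
# Purely illustrative - a hypothetical rubric checklist, NOT Zindi's actual
# evaluation code. Criterion names mirror the bullet list above; the point
# values and "mandatory" flags are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    mandatory: bool   # mandatory criteria act as gates, not just points
    max_points: int

RUBRIC = [
    Criterion("required features demonstrated",         True,  20),
    Criterion("workflow logically consistent",          True,  15),
    Criterion("offline behaviour explicitly described", True,  15),
    Criterion("on-device inference clearly shown",      True,  15),
    Criterion("image validation flow demonstrated",     True,  15),
    Criterion("edge cases and constraints handled",     False, 10),
    Criterion("system behaviour technically plausible", True,  10),
]
```

The point is that each item is a yes/no question about demonstrated behaviour, not a judgement of prose.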

Is This Just a Text Parser?

No. The evaluation system uses:

  1. A structured rubric
  2. Mandatory requirement gates
  3. Authenticity and realism checks
  4. Logical consistency checks

If required functionality is missing or unclear, the score is capped - regardless of writing quality.

You cannot “optimize wording” to bypass missing functionality.
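
As a sketch of how that gating could work in practice (again hypothetical - not the real scoring script, and the cap value is an assumption): if any mandatory criterion is not demonstrated, the total is capped, so polished wording cannot recover the lost ground.

```python
# Hypothetical sketch of mandatory-requirement gating - not the real script.
GATED_SCORE_CAP = 40  # assumed cap applied when any mandatory gate fails

def final_score(points: dict[str, int], demonstrated: dict[str, bool]) -> int:
    """Sum rubric points, then cap the total if any mandatory gate fails."""
    total = sum(points.values())
    if not all(demonstrated.values()):
        return min(total, GATED_SCORE_CAP)
    return total

# A well-written transcript that never actually shows on-device inference:
points = {"features": 20, "workflow": 15, "offline": 15,
          "inference": 0, "validation": 15}
gates = {"features": True, "workflow": True, "offline": True,
         "inference": False, "validation": True}
print(final_score(points, gates))  # -> 40: capped despite strong writing
```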

To reach the Top 10, a submission must:

  • Demonstrate all mandatory technical requirements
  • Show realistic offline operation
  • Show coherent end-to-end workflow
  • Provide technically consistent outputs
  • Pass plausibility checks

What Happens After Top 10?

Once in the Top 10:

  • The APK is reviewed
  • The codebase is audited
  • The video must match the transcript submitted
  • Offline stability is verified
  • Model inference behaviour is checked
  • Reproducibility is tested

If the real application does not match the transcript, disqualification is possible.

We designed this process to:

  • Ensure scalability in evaluation
  • Reward technical rigor
  • Maintain fairness
  • Verify actual implementation at the finalist stage

We appreciate you asking this - transparency matters. Let us know if you’d like further clarification.

26 Feb 2026, 08:55
1 upvote

There are a lot of different points here. All of this is new to me, there is a lot of unfamiliar terminology, and it will take time to figure everything out. One question: will the competition be extended by five days, given that it started five days late?