Hi everyone,
Following the recent discussion, and the helpful questions and comments from the community, we’d like to share an update and clarification with everyone:
The original question was:
“Given there are 15 targets, can we train 15 specialised models (one per target), or should we train a single model that predicts all 15 targets? We want to be sure we won’t be disqualified during evaluation.”
Updated guidance:
To balance real-world mobile deployment with modelling flexibility, we are allowing solutions that use one or a small number of models (typically up to 3 or 4), where each model predicts a group of related rice quality metrics.
👉 In other words:
Approaches involving a large number of independent models (e.g. one model per target) are not aligned with the spirit or intended real-world use case.
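To illustrate the intended approach, here is a minimal sketch of grouping the 15 targets into a few multi-output models rather than training 15 independent ones. The group names, the assignment of target columns to groups, and the simple least-squares model are all assumptions for illustration only, not a prescribed solution:

```python
import numpy as np

# Placeholder data: 200 samples, 10 features, 15 quality targets
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = rng.normal(size=(200, 15))

# Hypothetical grouping of the 15 target columns into 3 related groups
# (actual groupings should follow domain knowledge of the rice metrics)
groups = {
    "appearance": [0, 1, 2, 3, 4],
    "milling":    [5, 6, 7, 8, 9],
    "cooking":    [10, 11, 12, 13, 14],
}

# One multi-output model per group instead of one model per target
weights = {}
for name, cols in groups.items():
    W, *_ = np.linalg.lstsq(X, Y[:, cols], rcond=None)
    weights[name] = W  # shape: (n_features, n_targets_in_group)

# Stack group predictions back into the full 15-column output
preds = np.column_stack([X @ weights[n] for n in groups])
print(preds.shape)  # (200, 15)
```

Any multi-output learner (e.g. a gradient-boosted or neural model with a multi-target head) could take the place of the least-squares fit here; the point is that three grouped models cover all 15 targets.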
Thank you to everyone who raised this question; it helped us refine the guidance in a way that’s both practical and fair. We aim to host challenges that are both technically interesting and practically usable for clients, and we appreciate the community’s engagement in helping us strike that balance.
Good luck, and happy modelling 🚀