I have noticed a change in the evaluation metric for the ABSA competition: https://www.linkedin.com/posts/zindi-africa_absacustomerincomepredictionchallenge-datascience-activity-7048961736598503424-tL9a/?utm_source=share&utm_medium=member_desktop
Will this always be the case?
We would like to know where to focus our efforts: building a better model, a large ensemble, cleaner and better-documented code, or something else?
The only requirements for this challenge are that your code runs, that it reproduces your leaderboard score, and that you submit full documentation. There is an example of the documentation in the data download tab.
Yes, this is what we expect. We also expect you, Zindi, to uphold this. Otherwise it degenerates into an arbitrary handout with no bearing on the performance of the model.
Why not simply keep to this arrangement?