I was wondering what steps have been taken for reproducibility - some participants in this competition may have large GPU clusters allowing them to experiment far more than others, thus giving those people an unfair advantage.
Kaggle counters this (somewhat) by requiring submissions to be made from Kaggle notebooks, but there is no such mechanism on Zindi. Can anyone tell me what steps Zindi is taking for this competition?
The dataset is tiny, and using a cluster to train on it is overkill. Even a basic GPU would crunch through it in seconds.
Agreed, but a person with a cluster could run far more hyperparameter searches, and try a wider range of models and their specific flavours.