The image dataset is quite large (1.3 GB), and effectively double that if you also have to upload it to Drive or another service, which matters given storage limits. Zindi should consider hosting large datasets in a single public storage system that participants can fetch from directly. Imagine 100+ participants each having to download and re-upload that much data.
See https://github.com/sayedmohamedscu/Zindi_colab: you can load data from Zindi directly into Colab without spending that much of your own data, then copy the files from Colab to your Drive.
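To illustrate the "copy the file to your drive from Colab" step, here is a minimal sketch; it assumes the data has already been downloaded into a local ./data folder, as in the snippet below.

# Sketch: mount Google Drive in the Colab runtime, then copy the
# already-downloaded folder into it ("./data" is an assumption
# matching the download snippet below).
from google.colab import drive
drive.mount('/content/drive')
!cp -r ./data /content/drive/MyDrive/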
# This is something I did a while ago to download Zindi datasets
!git clone "https://github.com/yonas-g/zindi-dataset-downloader.git"
!cp -r zindi-dataset-downloader/* .
!mkdir "./data"
!mkdir "./data/images"
import ZindiDownloader.ZindiDownloader as zindi  # package copied in from the cloned repo
auth_token = 'key_here'  # replace with your personal Zindi auth token
downloader = zindi.ZindiDataDownloader(auth_token)
# Each file URL below can be found by inspecting the HTML of the competition's data page (Ctrl+F for the file name)
links = [
{"url": "https://api.zindi.africa/v1/competitions/microsoft-rice-disease-classification-challenge/files/Test.csv", "path": "./data"},
{"url": "https://api.zindi.africa/v1/competitions/microsoft-rice-disease-classification-challenge/files/Images.zip", "path": "./data/images"},
{"url": "https://api.zindi.africa/v1/competitions/microsoft-rice-disease-classification-challenge/files/SampleSubmission.csv", "path": "./data"},
{"url": "https://api.zindi.africa/v1/competitions/microsoft-rice-disease-classification-challenge/files/Train.csv", "path": "./data"}
]
for item in links:
    downloader.fetch(item["url"], target_path=item["path"])
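If you would rather not clone the helper repo, the same download can be sketched with plain requests. This assumes Zindi's file endpoints return the file when the auth token is sent as POST data; fetch_file and the 1 MB chunk size are my own names and choices, so verify against the API before relying on it.

# Hypothetical standalone downloader (assumes the endpoint accepts
# the auth token as POST data and streams the file back).
import os
import requests

def fetch_file(url, target_dir, token):
    os.makedirs(target_dir, exist_ok=True)
    filename = url.rsplit("/", 1)[-1]
    with requests.post(url, data={"auth_token": token}, stream=True) as resp:
        resp.raise_for_status()
        with open(os.path.join(target_dir, filename), "wb") as f:
            # Stream in 1 MB chunks so the 1.3 GB zip never sits fully in RAM
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

for item in links:
    fetch_file(item["url"], item["path"], auth_token)

Either way, Images.zip still needs extracting once downloaded, e.g. !unzip -q ./data/images/Images.zip -d ./data/images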