ANNOUNCEMENT: Important updates to Zindi’s rules and submission guidelines
published 3 Mar 2020, 12:59

As Zindi grows into a global data science competition platform, it’s important that we regularly revisit the systems and rules we have in place, to make sure Zindi remains your favourite place to solve machine learning challenges.

Based on recent feedback and activity on the platform, we are updating our submissions policy and competition rules. You can read our new rules and competition guidelines at the end of this post.

We are also making longer-term changes to the platform to limit the incidence of cheating and improve the experience the vast majority of you have on the platform. These changes will serve to streamline and professionalise our code review process, eliminate data leaks, and reduce the frequency and impact of cheating on the platform.

New submission guidelines

Effective immediately, we will be implementing new stricter rules for code submissions. In addition to systematically requesting code from the top 3 submissions, we will now reserve the right to request code from the top 20 users/teams on each leaderboard at the close of each competition. Your code submission must follow the guidelines below:

  • Your submission must run on the original datasets provided on the competition page
  • Your submission must use the most up-to-date packages and you may not use custom packages
  • Your code must be submitted within competition deadlines (usually 48 hours after close of challenge)
  • Your code must stick to good coding practice and you must provide full documentation

If your submitted code does not reproduce your score on the leaderboard, we will adjust your rank to match the submission closest to the solution your code actually generates.

If your code does not run, you will be removed from the leaderboard. Please make sure your code runs before submitting your solution.

Please take a moment to review the full submission guidelines below.

Consequences for cheating

If an individual or a team is caught cheating, all team members:

  • will receive a formal warning
  • will be disqualified from the challenge(s) they are currently involved in
  • will be disqualified from winning points or prizes for six months

Anyone caught cheating a second time will:

  • have their account disabled and be permanently banned from the platform
  • be banned from creating a new Zindi account

Steps to limit data leaks

We have taken steps to put a more rigorous review process in place for all competitions, but we acknowledge that it is not always feasible or possible for us to determine potential data leaks before the start of a competition.

If a leak is found, the leak will be made public. You can see how to report a leak below. In order to preserve the value of the competition and ensure the final solutions are both valid and useful, Zindi reserves the right to update the train and test sets, or to close or relaunch a challenge, in response to a data leak.

How to report suspected cheating or a data leak

If you notice suspicious activity, or find a potential data leak, we urge you to report it immediately. You can report suspected cheating or a data leak by sending an email to zindi@zindi.africa.

If you suspect another user or a team of cheating or otherwise abusing the platform, kindly provide any supporting evidence you can. Our team will then review this and take the necessary steps to validate and act on the claims made.

Other platform changes

We will be reviewing suspicious behaviour during the course of our competitions and may request to see code at any time during a competition. Should we request your code during a competition, you will have 24 hours to submit your code for review, or face disqualification.

Finally, in order to aid transparency, we will be adding the number of submissions for each user/team to the competition leaderboards.

The integrity of the Zindi platform is of utmost importance to us, as it is to you. We are committed to doing everything we can to make sure that legitimate efforts on Zindi are recognised and rewarded fairly while unethical behaviour is sanctioned and strongly discouraged. We’d like to thank everyone on the platform for complying with these rules and making Zindi a safe and welcoming competitive space for data science.

Read the full rules in the Zindi blog post here.

This is a good move. Quick question: are you going to extend the awards to the top 20? Currently only the top 10 receive awards (cash prizes/Zindi points).


Thank you Zindi for listening to the users' feedback, and for updating the rules in a way that will make it harder for users to abuse the old rules for personal benefit. This will not only make your usual competitors complain less (yes, I complain a lot), but will also help you retain newcomers when they see that YOU are taking action to give all of us a better competing experience.

Also, does the code review apply to Zindi Points challenges, or only to challenges with money prizes?

Suggestion 1:

It would be more discouraging for users to privately share solutions if each competitor's logged behaviour were reviewed against other users' behaviour, either by a hand-picked group of users (to verify that no one has been submitting similar scores at the same time as other users) or by one of your employees. The first option is better: after each competition, you could let users sign up to receive a CSV file containing all information about the submissions made in that competition. If something suspicious is caught, this group could review it and vote on it.

Suggestion 2:

Since many competitions require participants to be African or to be a woman, I think Zindi should encourage everyone to identify their country and gender from the start, to avoid falsification and to better verify a person's origin.

Suggestion 3:

Submissions by former teammates around the same time and with similar scores are highly suspicious. Please make it clear that if you are considering forming a team but have not yet decided, you should not privately share code (I speak from personal experience). This type of behaviour looks like cheating, and other platforms reasonably take action against it.

Thank you.

Great announcement! We are getting close to the fun of competition. I had even started to think that ML competitions were just for cheating, but with this announcement you are winning some competitors back. :)

"Your submission must use the most up-to-date packages": I think this rule is counterproductive. We could just send the notebook along with each `package.__version__`. New versions of some packages introduce bugs, so forcing upgrades can break working code.
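A minimal sketch of this idea, using only Python's standard-library `importlib.metadata` (the `pin_versions` helper and the package names are just illustrative, not an official Zindi requirement):

```python
# Sketch: record the exact installed versions of the packages a notebook
# uses, so reviewers can reproduce the environment instead of requiring
# "the most up-to-date packages".
from importlib.metadata import version, PackageNotFoundError

def pin_versions(package_names):
    """Return a dict mapping each package name to its installed version."""
    pinned = {}
    for name in package_names:
        try:
            pinned[name] = version(name)
        except PackageNotFoundError:
            pinned[name] = "not installed"
    return pinned

# Print a requirements-style listing that could be attached to a submission.
for pkg, ver in pin_versions(["pip", "setuptools"]).items():
    print(f"{pkg}=={ver}")
```

Attaching output like this to a submission would let reviewers recreate the exact environment (e.g. with `pip install pkg==version`) rather than chasing whatever the latest release happens to be.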