Meet the Winners of the Turtle Conservation Challenge
Tutorial: A Contrastive Learning Approach
The ability to distinguish between individuals of the same species is a critical tool for modern conservation. For example, sea turtle conservation efforts aim to track individual turtles to help reveal patterns of movement and residency. Sea turtles are a known indicator species, meaning that their presence and abundance reflect the health of the wider ecosystem. Increasing our ability to identify and understand them can therefore enhance our ecological understanding.
Sea turtles can be identified by their facial scales, which are as unique as a human fingerprint. Traditionally, individual recognition has been achieved manually by attaching tags to the flippers of found individuals. However, ecological investigations that require recapturing individuals to study changes in population dynamics are severely affected by the loss of the tags (or of the marks on them). Moreover, tags are expensive, and over the long duration of a sea turtle's life cycle they can deteriorate and need replacing.
The aim of this competition is to build a machine learning model to identify individual sea turtles. For each image presented, the model should output the turtle’s unique ID or, if the image corresponds to a new turtle (not present in the database), identify it as a new individual. A dataset of labelled photos of turtle faces and a tutorial including a simple baseline model have been provided.
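The tutorial takes a contrastive learning approach; with such a model, identification typically reduces to nearest-neighbour search in an embedding space over the labelled database, with a similarity threshold deciding when to predict a new individual. The sketch below is a hypothetical illustration of that matching step only, not the tutorial's code: the embed function, the threshold value and the helper name are assumptions.

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder: in practice, an L2-normalised embedding produced by a
    network trained with a contrastive objective on labelled turtle faces."""
    raise NotImplementedError

def rank_candidates(query_emb, gallery_embs, gallery_ids,
                    new_id="new_turtle", new_threshold=0.6, top_k=5):
    """Rank up to top_k turtle IDs for one query embedding.

    gallery_embs: (N, D) array of embeddings for known (labelled) turtles.
    gallery_ids:  list of N turtle IDs aligned with gallery_embs.
    If the best cosine similarity falls below new_threshold, "new_turtle"
    is promoted to rank 1, reflecting low confidence that the turtle is known.
    """
    sims = gallery_embs @ query_emb          # cosine similarity for unit vectors
    order = np.argsort(-sims)                # best match first
    ranked = []
    for idx in order:                        # de-duplicate IDs, keep best rank
        tid = gallery_ids[idx]
        if tid not in ranked:
            ranked.append(tid)
        if len(ranked) == top_k:
            break
    if sims[order[0]] < new_threshold:
        ranked = [new_id] + ranked[:top_k - 1]
    return ranked

Whether a similarity threshold or a dedicated "new individual" classifier works better is one of the design choices the challenge leaves open.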
About DeepMind (deepmind.com)
We’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence (AI). When we started DeepMind in 2010, there was far less interest in the field of AI than there is today. To accelerate the field, we took an interdisciplinary approach along with new ways of organizing scientific endeavour.
We achieved early success in computer games, which researchers often use to test AI. We joined forces with Google in 2014 to accelerate our work, while continuing to set our own research agenda.
We’re a dedicated scientific community, working together to “solve intelligence” and ensure our technology is used for widespread public benefit.
We work closely with Google and other experts to find ways for our advances to benefit society.
So far, our systems have shown how they can save energy, identify eye disease, accelerate science, and improve Google products used across the world.
About Ocean Hub Africa (ocean-innovation.africa)
We believe that innovations led by science, technology, and entrepreneurship will provide the necessary leverage points to address the Sustainable Development Goals (SDGs) at the right pace and scale. By addressing the issues of a sustainable ocean economy and ocean conservation, our event seeks to find solutions linking up to almost all SDGs in addition to SDG 14 – Life Below Water.
Our focus stems from a passion for, and the need for, efficient, impactful action: at the current pace, the SDG objectives would only be met by 2094 instead of 2030! Oceans are the heart and lungs of our planet, and it is crucial – and now urgent – to support the development and adoption of new sustainable ocean solutions that will shape better interactions between people and the Earth's largest ecosystem. In the recovery from the recent COVID-19 crisis, oceans also have a key role to play by keeping trade moving between nations and providing sustainable energy and healthy food for all.
About Local Ocean Conservation (LOC) (localocean.co):
LOC is a private, not-for-profit organisation committed to the protection of Kenya's marine environment. LOC supports communities and coastal areas in Watamu and Diani, Kilifi County, with marine conservation and community development projects centred on a holistic approach to conservation. LOC has been doing marine conservation for over 20 years.
Teams and collaboration
You may participate in competitions as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the competition, minus the total number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.
Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.
Code must not be shared privately outside of a team. Any code that is shared must be made available to all competition participants through the platform (i.e. on the discussion boards).
The Zindi user who sets up a team is the default Team Leader. The Team Leader can invite other data scientists to their team. Invited data scientists can accept or reject invitations. Until a second data scientist accepts an invitation to join a team, the data scientist who initiated the team remains an individual on the leaderboard. No additional members may be added to teams within the final 5 days of the competition or the last hour of a hackathon, unless otherwise stated in the competition rules.
A team can be disbanded if it has not yet made a submission. Once a submission is made, individual members cannot leave the team.
All members in the team receive points associated with their ranking in the competition and there is no split or division of the points between team members.
Datasets and packages
The solution must use publicly-available, open-source packages only. Your models should not use any of the metadata provided.
You may use only the datasets provided for this competition. Automated machine learning tools such as automl are not permitted.
If the challenge is a computer vision challenge, image metadata (Image size, aspect ratio, pixel count, etc) may not be used in your submission.
You may use pretrained models as long as they are openly available to everyone.
You are allowed to access, use and share competition data for any commercial, non-commercial, research or education purposes, under a CC-BY SA 4.0 license.
You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the competition data, and work with Zindi to rectify any unauthorised transmission or access.
Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.
Submissions and winning
You may make a maximum of 10 submissions per day.
You may make a maximum of 300 submissions for this competition.
Before the end of the competition you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.
Zindi maintains a public leaderboard and a private leaderboard for each competition. The Public Leaderboard includes approximately 30% of the test dataset. While the competition is open, the Public Leaderboard will rank the submitted solutions by the score they achieve on the competition's evaluation metric. Upon close of the competition, the Private Leaderboard, which covers the other 70% of the test dataset, will be made public and will constitute the final ranking for the competition.
Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and will not receive a score, nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the competition's discussion forum.
If you are in the top 20 at the time the leaderboard closes, we will email you to request your code. On receipt of the email, you will have 48 hours to respond and submit your code following the submission guidelines detailed below. Failure to respond will result in disqualification.
If your solution places 1st, 2nd, 3rd or most generalisable on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.
If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).
If the error metric requires probabilities to be submitted, do not set thresholds (or round your probabilities) to improve your place on the leaderboard. In order to ensure that the client receives the best solution, Zindi will need the raw probabilities. This allows the clients to set thresholds to their own needs.
The winners will be paid via bank transfer, PayPal, or another international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top 3 winners or team leaders will be required to present Zindi with proof of identification, proof of residence and a letter from their bank confirming their banking details. Winners will be paid in USD or the currency of the competition. If your account cannot receive US Dollars or the currency of the competition, your bank will need to provide proof of this and Zindi will try to accommodate you.
Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.
You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this competition constitutes your acceptance of these official competition rules.
Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.
Zindi also reserves the right to disqualify you and/or your submissions from any competition if we believe that you violated the rules or violated the spirit of the competition or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.
Please refer to the FAQs and Terms of Use for additional rules that may apply to this competition. We reserve the right to update these rules at any time.
Reproducibility of submitted code
Data standards:
Consequences of breaking any rules of the competition or submission guidelines:
Monitoring of submissions
This challenge has an associated leaderboard where submissions are ranked using top 5 mean average precision (mAP) as the metric. Turtles that are not predicted to be one of the labels in the training data should be labeled as “new_turtle”. E.g. your submission file could look something like this:
image_id       prediction1   prediction2     prediction3     prediction4   prediction5
ID_6NEDKOYZ    new_turtle    t_id_d6aYXtor   t_id_qZ0iZYsC   new_turtle    t_id_d6aYXtor
See the tutorial notebook for further information and examples.
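Zindi's exact scoring implementation is not reproduced here, but top-5 mean average precision with a single true ID per image is conventionally computed as 1/rank of the first position at which the correct ID appears among the five ordered predictions (0 if it does not appear), averaged over images. A minimal sketch under that assumption:

def map_at_5(true_ids, predictions):
    """true_ids:    {image_id: true turtle ID (or "new_turtle")}
    predictions:    {image_id: list of up to 5 predicted IDs, best first}
    Credits 1/rank for the first correct prediction, 0 otherwise."""
    total = 0.0
    for image_id, true_id in true_ids.items():
        for rank, pred in enumerate(predictions.get(image_id, [])[:5], start=1):
            if pred == true_id:
                total += 1.0 / rank
                break
    return total / len(true_ids)

# Hypothetical ground truth for the example row above: if the true ID is
# t_id_d6aYXtor, it is first matched at rank 2, so this image scores 1/2.
print(map_at_5(
    {"ID_6NEDKOYZ": "t_id_d6aYXtor"},
    {"ID_6NEDKOYZ": ["new_turtle", "t_id_d6aYXtor", "t_id_qZ0iZYsC",
                     "new_turtle", "t_id_d6aYXtor"]},
))  # 0.5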
Submissions will be evaluated within 15 working days of the close of the competition by a panel of judges on the following criteria:
Placement on leaderboard:
The likelihood of the approach and algorithm being able to generalise without frequent re-training:
Researchers from DeepMind will examine the top 10 entries on the leaderboard, based on a review of the corresponding code, taking into account the approach and algorithm used, to determine how likely it would be to generalise beyond the challenge dataset.
Competition closes on 21 April 2022.
Final submissions must be received by 11:59 PM GMT.
We reserve the right to update the contest timeline if necessary.