CGIAR Crop Yield Prediction Challenge

Prize: $3 000 USD
Tags: Helping Kenya · Prediction · Earth Observation
Participants: 875 joined, 195 active
Start: Oct 21, 2020
Close: Feb 07, 2021
Reveal: Feb 07, 2021

Can you predict maize yields on East African farms using satellite data?

Meet Team a_MAIZE_ing, winners of the CGIAR Crop Yield Prediction Challenge

Crop yield data is arguably the most important measure of agricultural productivity and production, and is used to monitor global and national food security, and to help determine effective agricultural policy.

Due to the physical challenges and high costs associated with the collection of crop-cut yield estimates, few datasets exist and even fewer are regularly sampled every season. For this reason, developing new methods to estimate crop yields at scale using the limited data available has been a prominent research priority.

One of the most promising yield estimation methods has been to use available crop-cut datasets to calibrate mathematical models to estimate crop yields from satellite imagery.

The aim of this challenge is to create a model capable of estimating the crop-cut maize yield for fields in East Africa. Given a time series of Sentinel-2 imagery and climate variables, your model must output a predicted yield in tons per acre.
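
For illustration only, here is a minimal baseline sketch. The file name Train.csv, the assumption that each row is one field with numeric summary features alongside Field_ID and Yield columns, and the choice of model are all hypothetical and not the actual dataset schema or a recommended solution.

# Hypothetical baseline: regress yield on per-field summary features derived
# from the Sentinel-2 and climate time series. File and column names are
# illustrative assumptions, not the real competition schema.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

train = pd.read_csv("Train.csv")  # assumed: one row per field, numeric features
features = [c for c in train.columns if c not in ("Field_ID", "Yield")]

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(
    model, train[features], train["Yield"],
    scoring="neg_root_mean_squared_error", cv=5,
)
print("Cross-validated RMSE:", -scores.mean())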

These models often need to be applied at scale, so large ensembles are discouraged. To incentivise more lightweight solutions, we are adding an additional submission criterion: your submission should take a reasonable time to train and run inference. Specifically, we should be able to re-create your submission on a single-GPU machine (e.g. an Nvidia P100) with less than 8 hours of training and 2 hours of inference.

About CGIAR (cgiar.org):

The CGIAR (formerly the Consultative Group on International Agricultural Research) is a consortium of international agricultural research centers around the world that focus on issues related to agricultural productivity, food security, poverty, and the environment. The CGIAR is made up of 15 research centers and operates in dozens of countries across Asia, Africa, and Latin America.

About The Platform for Big Data in Agriculture (bigdata.cgiar.org)

The CGIAR Platform for Big Data in Agriculture is a cross-center platform of the CGIAR with the goal of leveraging and harnessing the power of big data to accelerate and enhance the impact of international agricultural research. This 5-year platform (2017-2021) will provide global leadership in organizing open data, convening partners to develop innovative ideas, and demonstrating the power of big data analytics through inspiring projects. It is where information becomes power: power to predict, prescribe, and produce more food, more sustainably. It democratizes decades of agricultural data, empowering analysts, statisticians, programmers, and others to mine information for trends and quirks, and to develop rapid, accurate, and compelling recommendations for farmers, researchers, and policymakers.

About The Big Data in Agriculture Convention (bigdata.cgiar.org/virtual-convention-2020)

This year’s Convention will be held online from 19 to 23 October 2020, with the theme of digital dynamism for adaptive food systems.

This convention is the third annual event to bring together the people and organizations that make the Big Data Platform successful, a mix of scientists, policy makers, entrepreneurs, technologists, and agriculturalists. Three days of focused, action-based discussion to find commonalities across research institutes, governments, and private organizations sets the stage for a productive and data-driven year ahead!

About The Alliance of Bioversity and the International Center for Tropical Agriculture (CIAT) (bioversityinternational.org/alliance)

CIAT (International Center for Tropical Agriculture) works in collaboration with hundreds of partners to help developing countries make farming more competitive, profitable, and resilient through smarter, more sustainable natural resource management. We help policymakers, scientists, and farmers respond to some of the most pressing challenges of our time, including food insecurity and malnutrition, climate change, and environmental degradation.

About The International Food Policy Research Institute (IFPRI) (ifpri.org)

The International Food Policy Research Institute (IFPRI) provides research-based policy solutions to sustainably reduce poverty and end hunger and malnutrition in developing countries. Established in 1975, IFPRI currently has more than 600 employees working in over 50 countries. It is a research center of CGIAR, a worldwide partnership engaged in agricultural research for development.

About Amazon Web Services (https://aws.amazon.com)

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally, including Cape Town, South Africa. Millions of customers, including academic and research institutions like CGIAR, are using AWS to lower costs, become more agile, and innovate faster.

Rules

This challenge is open to all and not restricted to any country.

Teams and collaboration

You may participate in this competition as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the competition, minus the highest number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.

Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be disqualified.

Code must not be shared privately outside of a team. Any code that is shared must be made available to all competition participants through the platform (i.e. on the discussion boards).

Datasets and packages

The solution must use publicly available, open-source packages only. Your models should not use any of the metadata provided.

You may use only the datasets provided for this competition. Automated machine learning tools (e.g. AutoML) are not permitted.

If the challenge is a computer vision challenge, image metadata (image size, aspect ratio, pixel count, etc.) may not be used in your submission.

If external data is allowed, you may only use data that is freely available to everyone. You must send it to Zindi to confirm that it is allowed to be used; once approved, it will appear on the data page under additional data.

You may use pretrained models as long as they are openly available to everyone.

The data used in this competition is the sole property of Zindi and the competition host. You may not transmit, duplicate, publish, redistribute or otherwise provide or make available any competition data to any party not participating in the Competition (this includes uploading the data to any public site such as Kaggle or GitHub). You may upload, store and work with the data on any cloud platform such as Google Colab, AWS or similar, as long as 1) the data remains private and 2) doing so does not contravene Zindi’s rules of use.

You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the competition data, and work with Zindi to rectify any unauthorised transmission or access.

Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.

Submissions and winning

You may make a maximum of 10 submissions per day. Your highest-scoring solution on the private leaderboard at the end of the competition will be the one by which you are judged.

Zindi maintains a public leaderboard and a private leaderboard for each competition. The Public Leaderboard includes approximately 50% of the test dataset. While the competition is open, the Public Leaderboard will rank the submitted solutions by the score they achieve on the evaluation metric. Upon close of the competition, the Private Leaderboard, which covers the other 50% of the test dataset, will be made public and will constitute the final ranking for the competition.

If you are in the top 20 at the time the leaderboard closes, we will email you to request your code. On receipt of the email, you will have 48 hours to respond and submit your code following the submission guidelines detailed below. Failure to respond will result in disqualification.

If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.

If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).

The winners will be paid via bank transfer, PayPal, or other international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners.

You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this competition constitutes your acceptance of these official competition rules.

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

Zindi also reserves the right to disqualify you and/or your submissions from any competition if we believe that you violated the rules or violated the spirit of the competition or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.

Please refer to the FAQs and Terms of Use for additional rules that may apply to this competition. We reserve the right to update these rules at any time.

Reproducibility

  • If your submitted code does not reproduce your score on the leaderboard, we reserve the right to adjust your rank to the score generated by the code you submitted.
  • If your code does not run you will be dropped from the top 10. Please make sure your code runs before submitting your solution.
  • Always set the seed. Rerunning your model should always place you at the same position on the leaderboard; a minimal seeding sketch is shown after this list. When running your solution, if randomness shifts you down the leaderboard we reserve the right to adjust your rank to the closest score that your submission reproduces.
  • We expect full documentation. This includes:
      • All data used
      • Output data and where it is stored
      • An explanation of the features used
  • Your solution must include the original data provided by Zindi and validated external data (no processed data)
  • All editing of data must be done in a notebook (i.e. not manually in Excel)
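
The following is a minimal, hypothetical seeding sketch in Python; which libraries need seeding depends on your own stack, so extend it with framework-specific seeding (e.g. PyTorch or TensorFlow) if you use them.

import os
import random
import numpy as np

SEED = 42  # any fixed value; what matters is that it never changes between runs

# Only affects hash randomisation if set before the Python interpreter starts
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)     # Python's built-in RNG
np.random.seed(SEED)  # NumPy RNG used by many ML libraries

# If you use a deep learning framework, seed it as well, for example:
# import torch; torch.manual_seed(SEED)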

Data standards:

  • Your submitted code must run on the original train, test, and other datasets provided.
  • If external data is allowed it must not exceed 1 GB. External data must be freely and publicly available, including pre-trained models with standard libraries. If external data is allowed, any data used should be shared on the discussion forum.
  • Packages:
      • You must use the most recent versions of packages. Custom packages in your submission notebook will not be accepted.
      • You may only use tools available to everyone, i.e. no paid services or free trials that require a credit card.

Consequences of breaking any rules of the competition or submission guidelines:

  • First offence: No prizes or points for 6 months. If you are caught cheating, all individuals involved will be disqualified from the challenge(s) in which the cheating occurred, and you will be disqualified from winning any competitions or Zindi points for the next six months.
  • Second offence: Banned from the platform. If you are caught a second time, your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points using any other account.

Monitoring of submissions

  • We will review the top 20 solutions of every competition when the competition ends.
  • We reserve the right to request code from any user at any time during a challenge. You will have 24 hours to submit your code following the rules for code review (see above).
  • If you do not submit your code within 24 hours, you will be disqualified from winning any competitions or Zindi points for the next six months. If you fall under suspicion again, your code is requested, and you fail to submit it within 24 hours, your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points.

Evaluation

The error metric for this competition is the Root Mean Squared Error.
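
For reference, RMSE can be computed locally with NumPy; a minimal sketch follows (y_true and y_pred are placeholder arrays of per-field yields).

import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error between observed and predicted yields."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative numbers only:
print(rmse([1.7, 4.4, 2.0], [1.5, 4.0, 2.5]))  # ≈ 0.387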

Your submission file should contain two columns, Field_ID and Yield, with one row for every field in the test set.

Your submission file should look like this (numbers to show format only):

Field_ID   Yield
E9UZCEA     1.7
1WGGS1Q     4.4

EG2KXE2     ...
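
As a hypothetical sketch of producing a correctly formatted submission file (the Train.csv and Test.csv file names are assumptions, and the constant mean-yield prediction is only a placeholder for your own model's output):

import pandas as pd

train = pd.read_csv("Train.csv")  # assumed file names
test = pd.read_csv("Test.csv")

# Trivial placeholder: predict the mean training yield for every test field.
submission = pd.DataFrame({
    "Field_ID": test["Field_ID"],
    "Yield": train["Yield"].mean(),
})
submission.to_csv("submission.csv", index=False)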

Prizes

There will be two winners drawn from two pools of participants. The first pool is for data scientists and researchers who are part of the CGIAR network, and the second is for anyone else on Zindi who is not part of the CGIAR network!

CGIAR

  • 1st Place: $1 500 USD in AWS Credit

Outside of CGIAR

  • 1st Place: $1 500 USD (Cash)

Timeline

Competition closes on 7 February 2021.

Final submissions must be received by 11:59 PM GMT.

We reserve the right to update the contest timeline if necessary.