
Melio MLOps Machine Translation Challenge

$6 500 USD
Challenge completed ~1 year ago
MLOps
Natural Language Processing
Machine Translation
450 joined
43 active
Start
Mar 03, 24
Enrolments close
Aug 19, 24
Close
Sep 01, 24
Reveal
Sep 17, 24
Can you build and deploy a machine translation solution from Dyula to French?

In the digital era, machine translation (MT) is fundamental for global communication, enhancing everything from instant translations on mobile devices to complex multilingual support in customer service centres. However, the scarcity of training data for low-resource languages like Dyula poses a significant challenge in generalising and adapting MT models to new domains.

In this challenge, we invite you to put your MLOps skills to the test and develop a machine translation model that balances model accuracy with practical deployment considerations such as throughput, latency, and cost, while also following good reproducibility and documentation practices.

The challenge:

We invite you to craft an MT model capable of translating content from Dyula into French across multiple domains like news articles, daily conversations, spoken dialogue transcripts, and books.

Deploying machine learning models often involves trade-offs between latency, accuracy, throughput, and cost, all of which are under your control as a data scientist or machine learning engineer. For this competition, we have designed the use case below to provide boundary conditions for balancing these trade-offs.

Anticipated use case:

Your contribution will target a unique use case: AI Student Learning Assistant (AISLA) - a free learning assistant that helps students converse and learn in their native language.

AISLA can support students with creating study plans, summarising course materials, and conducting question-and-answer sessions for exam preparations. AISLA can read and respond in either French or Dyula, enhancing educational accessibility for Dyula speakers by providing reliable French-to-Dyula translations. Your model will serve as a potential backbone for AISLA on Discord.

Note that AISLA is included to provide context and boundary conditions of typical real-world use cases, but is not part of the competition evaluation. You are only required to submit the machine translation model.

However, it is important to understand that your solution must balance these key aspects:

  • Accuracy: delivering translations with high semantic fidelity.
  • Latency: providing immediate results to support real-time learning interactions.
  • Cost efficiency: ensuring that operational costs are minimised without compromising quality, making the most of available resources.

Use the provided evaluation criteria rubric to design your systems and effectively manage these trade-offs for optimal performance.

Organisations

About Melio AI (melio.ai)

Melio is an AI consulting firm specialising in developing and deploying machine learning solutions. We help our customers navigate the complex MLOps landscape and ensure seamless Day 2 Operations for AI systems. Melio’s mission is to Make AI Frictionless for everyone, enabling businesses and individuals to leverage AI effortlessly.

As part of this journey, we launched Highwind, a marketplace for delivering AI-as-a-Service. The platform connects small businesses that need AI services with high-quality AI solutions on the Highwind marketplace. We aim to connect Africa’s AI talent with the toughest problems on our continent while giving that talent exposure on the global stage.

About data354 (data354.com)

data354 is a consulting firm specialising in data analytics and AI solutions, and a leader in its field in French-speaking Africa with an international presence. Its mission is to unleash the power of data through data-centric digitalisation strategies and to develop AI solutions grounded in a deep understanding of local realities.

data354 offers end-to-end support: Data & AI strategy consulting; implementation of data infrastructure and platforms; and development of intelligent solutions.

About Université Virtuelle de Côte d’Ivoire (uvci.edu.ci)

Université Virtuelle de Côte d’Ivoire (UVCI) is a public, fully online (virtual) university headquartered in Abidjan, with students across the country and the wider West African region. UVCI is the national leader in digital transformation in education, with teaching focused mainly on the digital sciences, including computer science, data science, artificial intelligence (AI), digital marketing, and digital communication. The institute conducts diverse research projects not only in AI, of which this competition dataset is an output, but also in e-Health, precision agriculture, and geo-intelligence.

Submission

To participate in this competition, you will need to:

Deploy the Docker Repo Asset as a use case on Highwind

  • You can see how to do this here.

Submit a .zip file on Zindi, including:

  1. Dockerfile
  2. README.md
  3. main.py
  4. image_name.txt
  5. Python environment files, such as pyproject.toml or requirements.txt

An example repository is available at: https://github.com/highwind-ai/examples
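
For orientation, below is a minimal sketch of what the main.py entry point might look like. It loosely follows the KServe custom-predictor pattern used in the Highwind examples; the checkpoint path (./model), the model name, and the instances/predictions request schema are illustrative assumptions, so defer to the examples repository for the exact contract expected by the grader.

```python
# main.py -- minimal sketch of a KServe custom predictor (illustrative only).
# The checkpoint path, model name and request schema are placeholders; follow
# the Highwind examples repository for the exact contract.
from kserve import Model, ModelServer
from transformers import pipeline


class DyulaFrenchTranslator(Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.translator = None
        self.load()

    def load(self):
        # Load the fine-tuned checkpoint baked into the Docker image.
        # Depending on the checkpoint, src_lang/tgt_lang arguments may be needed.
        self.translator = pipeline("translation", model="./model")
        self.ready = True

    def predict(self, payload: dict, headers: dict = None) -> dict:
        # KServe V1-style request: {"instances": ["dyula sentence", ...]}
        sentences = payload["instances"]
        outputs = self.translator(sentences)
        return {"predictions": [o["translation_text"] for o in outputs]}


if __name__ == "__main__":
    ModelServer().start([DyulaFrenchTranslator("dyula-french-mt")])
```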

Please note you may have to wait a few minutes or up to a few hours to see the leaderboard score change.

Important:

  • Each team can only submit a total of 10 times for testing.
  • As this is an MLOps competition, we expect you to follow good experiment-tracking practices. You will be scored on your final submission, so remember to save your best submission and resubmit it as your final one to put your best foot forward.

Evaluation

This is an MLOps challenge and will work differently from a normal Zindi competition.

A total of eight areas will be evaluated.

To understand more about the evaluation details, visit: https://github.com/highwind-ai/examples

  • Accuracy: BLEU (weighting 45%; public leaderboard on the validation set, private leaderboard on the test set). Evaluates the translation performance of the model. Higher is better.
  • Latency: Inference Latency (weighting 15%; public leaderboard on the validation set, private leaderboard on the test set). Evaluates the model’s translation speed by measuring the average time, in milliseconds, to translate a single sentence during inference (using a predefined set of sentences). This is provided by KServe. Lower is better.
  • Cost: CPU Usage (weighting 10%; public leaderboard on the validation set, private leaderboard on the test set). Measures the resource efficiency of your model by calculating the average CPU usage during inference (in %). Lower is better. This is estimated via Prometheus.
  • Cost: Memory Usage (weighting 10%; public leaderboard on the validation set, private leaderboard on the test set). Measures the resource efficiency of your model by calculating the peak memory usage during inference (in %). Lower is better. This is estimated via Prometheus.
  • Cost: Inference Docker Image Size (weighting 10%; public and private leaderboards receive the same score). Measures the resource efficiency of your model by measuring the size of your inference Docker image (in GB). Lower is better.
  • Solution Quality: Code Quality (weighting 10%; public and private leaderboards receive the same score). Assesses your overall code quality via a linting score. Higher is better.
  • Solution Quality: Documentation (weighting 5%; private leaderboard only). Measures your ability to document your model and code by looking for a README.md file containing at least Usage and Model Card sections.

You can read more about the evaluation criteria in the document called Evaluation_Details.pdf in the download data section on the data page.
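
Because BLEU carries the largest weight, it can be worth estimating it locally on the validation split before spending one of your limited submissions. A minimal sketch using the sacrebleu package is shown below; the file names are placeholders, and the official scorer may use a different configuration, so treat the local number as a rough guide only.

```python
# Rough local BLEU estimate with sacrebleu before submitting.
# "hypotheses.txt" and "references.txt" are placeholder file names containing
# one translated / reference French sentence per line, in the same order.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of hypothesis strings and a list of reference
# streams (one stream per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```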

How to get started with Highwind

Introduction to Docker - Containerise your ML Models for Highwind - Part 1

Introduction to KServe - Serve your ML Models - Part 2

Deploy Your Models to Highwind - Part 3

Model Grading and Submitting to Zindi

Timeline

The competition starts on 3 May 2024.

Registration for this competition closes on 19 August 2024 at 23:59 GMT.

The challenge closes on 1 September 2024 at 23:59 GMT.

Final submissions must be submitted to Zindi by 1 September 2024 at 23:58 GMT to be considered for scoring.

The private leaderboard will be revealed on 12 September 2024 at 23:59 GMT.

We reserve the right to update this timeline if necessary.

Prizes

1st place: $3 000 USD

2nd place: $2 000 USD

3rd place: $1 000 USD

Top Female Team: $500 USD (if the top female team is in the top 3 places, this is awarded to the next top female or majority female team)

There are 6 000 Zindi points available. You can read more about Zindi points here.

The top female or majority female team will receive an additional 1 000 Zindi points.

The top 3 winners will have their solutions hosted on Highwind.

Registration

Once you have joined the Zindi competition, you will receive a welcome email from Highwind.

To access Highwind, visit the Highwind login page and select “Forget Password” to initiate a password reset. You will receive an email containing a link to create a new password. After resetting your password, you can use it to log into Highwind.

Follow the guidelines here on how to register on Highwind - docs.highwind.ai/zindi/register/

Please allow a few minutes after enrolling in the competition for your credentials to be activated before you log into Highwind.

How to get started with Zindi

How to enroll in your first Zindi competition

How to create a team on Zindi

How to update your profile on Zindi

How to use Colab on Zindi

How to mount a drive on Colab

Rules

This challenge is open to all.

Teams and collaboration

You may participate in competitions as an individual or in a team.

Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.

Code must not be shared privately outside of a team. Any code that is shared must be made available to all competition participants through the platform (i.e. on the discussion boards).

Datasets and packages

The solution must use publicly-available, open-source packages only.

You may use only the datasets provided for this competition. Automated machine learning tools such as automl are not permitted.

You may use pretrained models as long as they are openly available to everyone.

The data for this competition is under the license CC-BY-SA-4.0.

You can access the dataset here: https://huggingface.co/datasets/data354/Koumankan_mt_dyu_fr

You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the competition data, and work with Zindi to rectify any unauthorised transmission or access.

Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi and Highwind.

Submissions and winning

You may make a maximum of 10 submissions per day.

You may make a maximum of 75 submissions for this competition.

Before the end of the competition you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.

During the competition, your best public score will be displayed regardless of the submissions you have selected. When the competition closes your best private score out of the 2 selected submissions will be displayed.

Zindi maintains a public leaderboard and a private leaderboard for each competition. The Public Leaderboard includes approximately 25% of the test dataset. While the competition is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the competition, the Private Leaderboard, which covers the other 75% of the test dataset, will be made public and will constitute the final ranking for the competition.

Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the Competition’s discussion forum.

If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of the email, you will have 48 hours to respond and submit your code, following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.

If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.

If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).

The winners will be paid via bank transfer, PayPal if payment is less than or equivalent to $100, or other international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top winners or team leaders will be required to present Zindi with proof of identification, proof of residence and a letter from your bank confirming your banking details. Winners will be paid in USD or the currency of the competition. If your account cannot receive US Dollars or the currency of the competition then your bank will need to provide proof of this and Zindi will try to accommodate this.

Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.

Payment will be made after code review and sealing the leaderboard.

You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this competition constitutes your acceptance of these official competition rules.

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

Zindi also reserves the right to disqualify you and/or your submissions from any competition if we believe that you violated the rules or violated the spirit of the competition or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.

Please refer to the FAQs and Terms of Use for additional rules that may apply to this competition. We reserve the right to update these rules at any time.

Reproducibility of submitted code

  • If your submitted code does not reproduce your score on the leaderboard, we reserve the right to adjust your rank to the score generated by the code you submitted.
  • If your code does not run you will be dropped from the top 10. Please make sure your code runs before submitting your solution.
  • Always set the seed (see the seeding sketch after this list). Rerunning your model should always place you at the same position on the leaderboard. When running your solution, if randomness shifts you down the leaderboard, we reserve the right to adjust your rank to the closest score that your submission reproduces.
  • Custom packages in your submission notebook will not be accepted.
  • You may only use tools available to everyone i.e. no paid services or free trials that require a credit card.
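
As a concrete illustration of the seeding requirement above, here is a minimal sketch assuming a PyTorch-based pipeline; adapt it to whichever frameworks your solution actually uses.

```python
# Minimal seeding sketch for reproducible runs (assumes PyTorch is used).
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Optional: trade some speed for determinism on GPU.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```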

Documentation

A README markdown file is required

It should cover:

  • How to set up folders and where each file is saved
  • Order in which to run code
  • Explanations of features used
  • Environment for the code to be run (conda environment.yml file or an environment.txt file)
  • Hardware needed (e.g. Google Colab or the specifications of your local machine)
  • Expected run time for each notebook. This will be useful to the review team for time and resource allocation.

Your code needs to run properly; code reviewers do not have time to debug code. If your code does not run easily, you will be bumped down the leaderboard.

Consequences of breaking any rules of the competition or submission guidelines:

  • First offence: No prizes for 6 months and 2000 points will be removed from your profile (probation period). If you are caught cheating, all individuals involved will be disqualified from the challenge(s) in which the cheating occurred, you will be disqualified from winning any competitions for the next six months, and 2000 points will be removed from your profile. If you have fewer than 2000 points on your profile, your points will be set to 0.
  • Second offence: Banned from the platform. If you are caught for a second time your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points using any other account.
  • Teams with individuals who are caught cheating will not be eligible to win prizes or points in the competition in which the cheating occurred, regardless of the individuals’ knowledge of or participation in the offence.
  • Teams with individuals who have previously committed an offence will not be eligible for any prizes for any competitions during the 6-month probation period.

Monitoring of submissions

  • We will review the top 10 solutions of every competition when the competition ends.
  • We reserve the right to request code from any user at any time during a challenge. You will have 24 hours to submit your code following the rules for code review (see above). Zindi reserves the right not to explain our reasons for requesting code. If you do not submit your code within 24 hours you will be disqualified from winning any competitions or Zindi points for the next six months. If you fall under suspicion again and your code is requested and you fail to submit your code within 24 hours, your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points with any other account.