In the digital era, machine translation (MT) is fundamental to global communication, powering everything from instant translations on mobile devices to complex multilingual support in customer service centres. However, the scarcity of training data for low-resource languages like Dyula poses a significant challenge to generalising and adapting MT models to new domains.
In this challenge, we invite you to put your MLOps skills to the test and develop a machine translation model that balances accuracy with practical deployment considerations like throughput, latency, and cost, while also following good reproducibility and documentation practices.
The challenge:
We invite you to craft an MT model capable of translating content from Dyula into French across multiple domains like news articles, daily conversations, spoken dialogue transcripts, and books.
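As a concrete starting point, one common approach is to fine-tune an openly available multilingual model that already covers Dyula. The sketch below is minimal and hypothetical, using the Hugging Face transformers library; the checkpoint name and sample sentence are illustrative assumptions, not competition requirements.

```python
# Minimal Dyula-to-French translation sketch using Hugging Face transformers.
# NOTE: the checkpoint is an illustrative assumption; any openly available
# pretrained model is allowed under the competition rules.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"  # hypothetical choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="dyu_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def translate(sentence: str) -> str:
    """Translate a single Dyula sentence into French."""
    inputs = tokenizer(sentence, return_tensors="pt")
    # Force French as the target language during generation.
    output_ids = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
        max_new_tokens=128,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(translate("I ni ce"))  # placeholder Dyula input
```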
Deploying machine learning models often involves trade-offs between latency, accuracy, throughput, and cost. These trade-offs are under your control as a data scientist or machine learning engineer. For this competition, we have designed the use case below to give you boundary conditions for balancing them.
Anticipated use case:
Your contribution will target a unique use case: AI Student Learning Assistant (AISLA) - a free learning assistant that helps students converse and learn in their native language.
AISLA can support students with creating study plans, summarising course materials, and conducting question-and-answer sessions for exam preparation. AISLA can read and respond in either French or Dyula, enhancing educational accessibility for Dyula speakers by providing reliable Dyula-to-French translations. Your model will serve as a potential backbone for AISLA on Discord.
Note that AISLA is included to provide context and boundary conditions of typical real-world use cases, but is not part of the competition evaluation. You are only required to submit the machine translation model.
However, it is important to understand that your solution must balance the key aspects described above: accuracy, latency, throughput, and cost.
Use the provided evaluation criteria rubric to design your systems and effectively manage these trade-offs for optimal performance.
About Melio AI (melio.ai)
Melio is an AI consulting firm specialising in developing and deploying machine learning solutions. We help our customers navigate the complex MLOps landscape and ensure seamless Day 2 Operations for AI systems. Melio’s mission is to Make AI Frictionless for everyone, enabling businesses and individuals to leverage AI effortlessly.
As part of this journey, we launched Highwind, a marketplace for delivering AI-as-a-Service. The platform connects small businesses that need AI services with high-quality AI solutions on the Highwind marketplace. We aim to connect Africa’s AI talent to solve the toughest problems on our continent while showcasing that talent on the global stage.
About data354 (data354.com)
data354 is a consulting firm specialising in data analytics and AI solutions, a leader in its field in French-speaking Africa with an international presence. Its mission is to unleash the power of data through data-centric digitalisation strategies and to develop AI solutions grounded in a deep understanding of local realities.
data354 offers end-to-end support: Data & AI strategy consulting, implementation of data infrastructure and platforms, and development of intelligent solutions.
About Université Virtuelle de Côte d’Ivoire (uvci.edu.ci)
Université Virtuelle de Côte d’Ivoire (UVCI) is a public, fully online (virtual) university headquartered in Abidjan, with students all over the country as well as across the West African region. UVCI is the national leader of digital transformation in education, with teaching mainly focused on digital sciences, encompassing computer science, data science, Artificial Intelligence (AI), digital marketing, and digital communication. The institute conducts diverse research projects not only in AI (of which this competition dataset is an output), but also in e-Health, precision agriculture, and geo-intelligence.
To participate in this competition, you will need to:
Deploy the Docker Repo Asset as a use case on Highwind (a minimal serving sketch follows this list)
Submit a .zip file of your solution on Zindi
An example repository looks like this: https://github.com/highwind-ai/examples
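Highwind serves models with KServe (see the tutorial links further down). Below is a minimal custom-predictor sketch assuming the kserve Python package; the model name, payload shape, and placeholder translate function are assumptions, and the schema in the highwind-ai/examples repository is authoritative for actual submissions.

```python
# Minimal KServe custom predictor sketch.
# ASSUMPTIONS: the kserve Python package is installed; `translate` below is
# a placeholder for a real Dyula-to-French inference call.
from kserve import Model, ModelServer

def translate(sentence: str) -> str:
    # Placeholder; swap in your actual model inference.
    return sentence

class DyulaFrenchModel(Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.ready = False
        self.load()

    def load(self):
        # Load model weights and tokenizer here in a real solution.
        self.ready = True

    def predict(self, payload: dict, headers: dict = None) -> dict:
        # Payload shape is an assumption; follow the schema in the
        # highwind-ai/examples repository for actual submissions.
        sentences = payload.get("instances", [])
        return {"predictions": [translate(s) for s in sentences]}

if __name__ == "__main__":
    ModelServer().start([DyulaFrenchModel("dyula-french-mt")])
```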
Please note you may have to wait a few minutes or up to a few hours to see the leaderboard score change.
Important:
This is an MLOps challenge and will work differently from a normal Zindi competition.
A total of seven criteria, grouped into four evaluation areas, will be evaluated.
To understand more about the evaluation details, visit: https://github.com/highwind-ai/examples
| Evaluation Area | Evaluation Criteria | Description | Weighting | Leaderboard |
|---|---|---|---|---|
| Accuracy | BLEU | Evaluates the translation performance of the model. Higher is better. | 45% | Public (validation set) & Private (test set) |
| Latency | Inference Latency | Evaluates the model’s translation speed by measuring the average time it takes, in milliseconds, to translate a single sentence during inference (using a predefined set of sentences). Provided by KServe. Lower is better. | 15% | Public (validation set) & Private (test set) |
| Cost | CPU Usage | Measures the resource efficiency of your model by calculating the average CPU usage during inference (in %). Estimated via Prometheus. Lower is better. | 10% | Public (validation set) & Private (test set) |
| Cost | Memory Usage | Measures the resource efficiency of your model by calculating the peak memory usage during inference (in %). Estimated via Prometheus. Lower is better. | 10% | Public (validation set) & Private (test set) |
| Cost | Inference Docker Image Size | Measures the resource efficiency of your model by measuring the size of your inference Docker image (in GB). Lower is better. | 10% | Public & private have the same score |
| Solution Quality | Code Quality | Assesses your overall quality via a linting score. Higher is better. | 10% | Public & private have the same score |
| Solution Quality | Documentation | Measures your ability to document your model and code by looking for a README.md file containing at least Usage and Model Card sections. | 5% | Private |
You can read more about the evaluation criteria in the document called Evaluation_Details.pdf in the download data section on the data page.
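To gauge where your model sits on the accuracy and latency axes before deploying, you can score it locally. The sketch below uses the sacrebleu package for BLEU and wall-clock timing for per-sentence latency; the translate function and the sample pairs are placeholders, and local numbers will only approximate the leaderboard’s KServe- and Prometheus-measured values.

```python
# Local evaluation sketch: corpus BLEU plus average per-sentence latency.
# ASSUMPTIONS: `translate` is your Dyula-to-French function; the sample
# pairs below are placeholders, not competition data.
import time
import sacrebleu

def translate(sentence: str) -> str:
    # Placeholder; replace with your model's inference call.
    return sentence

validation_pairs = [
    ("i ni ce", "merci"),  # placeholder examples
    ("i ka kene wa", "comment vas-tu ?"),
]

hypotheses, latencies_ms = [], []
for source, _ in validation_pairs:
    start = time.perf_counter()
    hypotheses.append(translate(source))
    latencies_ms.append((time.perf_counter() - start) * 1000)

references = [[target for _, target in validation_pairs]]
bleu = sacrebleu.corpus_bleu(hypotheses, references)

print(f"BLEU: {bleu.score:.2f}")  # higher is better (45% of the grade)
print(f"Avg latency: {sum(latencies_ms) / len(latencies_ms):.1f} ms")  # lower is better
```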
Introduction to Docker - Containerise your ML Models for Highwind - Part 1
Introduction to KServe - Serve your ML Models - Part 2
Deploy Your Models to Highwind - Part 3
Model Grading and Submitting to Zindi
The competition starts on 3 May 2024.
Registration for this competition closes on 19 August 2024 at 23:59 GMT.
The challenge closes on 1 September 2024 at 23:59 GMT.
Final submissions must be submitted to Zindi by 1 September 2024 at 23:58 GMT to be considered for scoring.
The private leaderboard will be revealed on 12 September 2024 at 23:59 GMT.
We reserve the right to update this timeline if necessary.
1st place: $3 000 USD
2nd place: $2 000 USD
3rd place: $1 000 USD
Top Female Team: $500 USD (if the top female team is in the top 3 places, this is awarded to the next top female or majority female team)
There are 6 000 Zindi points available. You can read more about Zindi points here.
The top female or majority female team will receive an additional 1 000 Zindi points.
The top 3 winners will have their solutions hosted on Highwind.
Once you have joined the Zindi competition, you will receive a welcome email from Highwind.
To access Highwind, visit the Highwind login page and select “Forgot Password” to initiate a password reset. You will receive an email containing a link to create a new password. After resetting your password, you can use it to log into Highwind.
Follow the guidelines here on how to register on Highwind - docs.highwind.ai/zindi/register/
Please allow a few minutes for your credentials to propagate after you enrol in the competition, before you log into Highwind.
How to enroll in your first Zindi competition
How to create a team on Zindi
How to update your profile on Zindi
How to use Colab on Zindi
How to mount a drive on Colab
This challenge is open to all.
Teams and collaboration
You may participate in competitions as an individual or in a team.
Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.
Code must not be shared privately outside of a team. Any code that is shared must be made available to all competition participants through the platform (i.e. on the discussion boards).
Datasets and packages
The solution must use publicly available, open-source packages only.
You may use only the datasets provided for this competition. Automated machine learning tools such as AutoML are not permitted.
You may use pretrained models as long as they are openly available to everyone.
The data for this competition is under the license CC-BY-SA-4.0.
You can access the dataset here: https://huggingface.co/datasets/data354/Koumankan_mt_dyu_fr
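Assuming the standard Hugging Face datasets library, the data can be loaded directly from that repository; the split name shown is an assumption worth verifying against the dataset card.

```python
# Load the competition dataset from the Hugging Face Hub.
# NOTE: split and column names are assumptions; check the dataset card at
# https://huggingface.co/datasets/data354/Koumankan_mt_dyu_fr
from datasets import load_dataset

dataset = load_dataset("data354/Koumankan_mt_dyu_fr")
print(dataset)  # inspect the available splits and columns

train = dataset["train"]  # assumed split name
print(train[0])           # first Dyula-French example
```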
You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the competition data, and work with Zindi to rectify any unauthorised transmission or access.
Your solution must not infringe the rights of any third party, and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi and Highwind.
Submissions and winning
You may make a maximum of 10 submissions per day.
You may make a maximum of 75 submissions for this competition.
Before the end of the competition, you need to choose 2 submissions to be judged for the private leaderboard. If you do not make a selection, your 2 best public leaderboard submissions will be used to score on the private leaderboard.
During the competition, your best public score will be displayed regardless of the submissions you have selected. When the competition closes your best private score out of the 2 selected submissions will be displayed.
Zindi maintains a public leaderboard and a private leaderboard for each competition. The Public Leaderboard includes approximately 25% of the test dataset. While the competition is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the competition, the Private Leaderboard, which covers the other 75% of the test dataset, will be made public and will constitute the final ranking for the competition.
Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and will not receive a score, nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the competition’s discussion forum.
If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of the email, you will have 48 hours to respond and submit your code following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.
If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.
If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).
The winners will be paid via bank transfer, PayPal (if payment is less than or equivalent to $100), or another international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top winners or team leaders will be required to present Zindi with proof of identification, proof of residence, and a letter from their bank confirming their banking details. Winners will be paid in USD or the currency of the competition. If your account cannot receive US Dollars or the currency of the competition, your bank will need to provide proof of this, and Zindi will try to accommodate the situation.
Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.
Payment will be made after code review and sealing the leaderboard.
You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this competition constitutes your acceptance of these official competition rules.
Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.
Zindi also reserves the right to disqualify you and/or your submissions from any competition if we believe that you violated the rules or violated the spirit of the competition or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.
Please refer to the FAQs and Terms of Use for additional rules that may apply to this competition. We reserve the right to update these rules at any time.
A README markdown file is required.
It should cover, at a minimum, the Usage and Model Card sections described in the Documentation evaluation criterion above.
Your code needs to run properly; code reviewers do not have time to debug code. If your code does not run easily, you will be bumped down the leaderboard.
Consequences of breaking any rules of the competition or submission guidelines:
Monitoring of submissions