
Telco Troubleshooting Agentic Challenge

EUR 40 000
~1 month left
Agentic AI
Fine-tuning
Large Language Models
485 joined
37 active
Start
Apr 17, 26
Close
May 18, 26
Reveal
May 29, 26
Can you build an AI agent that understands and optimizes telecom networks?

Note: the challenge has officially begun, and participants can start exploring the data in the Data section. Submissions will open on 17 April.

The Telco Troubleshooting Agentic Challenge focuses on network operations and maintenance. Participants are required to build intelligent agents that complete network fault diagnosis and troubleshooting tasks in the context of wireless and IP networks. The goal is to design agent capabilities (simulation, memory, RAG, skills) and processes (fine-tuning, CoT) that enable the agents to understand and optimise telecom networks. The entire competition uses Qwen3.5-35B-A3B as the base model. Participants may fine-tune the model but are not allowed to replace it with a different architecture or a model of a different parameter scale.
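
As a minimal sketch, the mandated base model could be loaded for local experimentation roughly as follows; the Hugging Face repository ID below is an assumption, so use the checkpoint location given in the challenge materials.

```python
# Minimal sketch: loading the mandated base model for local experimentation.
# The repository ID is an assumption -- use the checkpoint referenced in the
# challenge materials. Fine-tuning is allowed; swapping the model is not.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3.5-35B-A3B"  # hypothetical Hub ID for the base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "A cell reports high uplink interference. List the most likely root causes."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```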

The competition is organised around three phases and two tracks:

  • The first phase (Phase 1) will start on 3 April and finish on 4 May. During this phase, participants will use the available data and software to design their agents. Participants can continuously submit the answers generated by their agent and monitor the public leaderboard.
  • The second phase (Phase 2) will run from 4 May to 18 May. In this phase, participants will test their solutions on new data (drawn from the same distribution as the Phase 1 data). The top 30 solutions of each track on the public leaderboard will be selected for the final phase; these top teams will need to submit their agent for the final evaluation.
  • The third phase (Phase 3) will run from 18 May to 29 May, at the end of which the winners and the private leaderboard with the scores of the top 30 participants will be revealed.

The two tracks (Track A and Track B) focus on wireless and IP network troubleshooting, respectively.

Access the Global Launch here

This challenge is brought to you in partnership with the world's leading community organisations:

Supported by headline and technology partners:

Track Overview

Track A overview (refer to the readme for further information):

  • Participants are required to build intelligent agents that solve wireless tasks by calling the simulation interfaces provided by the Agent Tool Server (server.py).
  • The challenge will leverage a dedicated network environment capable of modelling realistic, real-world telecom scenarios, together with a central orchestration framework that:
      • exposes structured tasks to participating agents
      • connects agents to domain-specific tool APIs
      • coordinates interactions between agents and the network environment
  • Tasks may be single-answer or multiple-answer questions. The answers to multiple-answer questions should be separated by "|" and listed in ascending order, e.g., 'C3|C7' or 'C5|C9|C11|C20' (see the formatting sketch after this list).
  • The number of questions differs across the three phases. In Phase 1, we will provide a training set with 2,000 questions and answers, and a test set with 500 questions. In Phase 2, we will share a new test set with 500 questions. 500 new questions will be used to produce the final leaderboard in Phase 3.
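
For illustration, a small helper along these lines could build the pipe-separated answer string; the numeric sort key is an assumption about what "ascending order" means for option codes such as C9 vs C11.

```python
def format_answers(selected):
    """Join selected option codes into the expected pipe-separated string.

    Example: ["C7", "C3"] -> "C3|C7". Sorting on the numeric part is an
    assumption so that "C9" precedes "C11" (a plain string sort would not).
    """
    return "|".join(sorted(set(selected), key=lambda c: (c[0], int(c[1:]))))

print(format_answers(["C20", "C5", "C11", "C9"]))  # -> "C5|C9|C11|C20"
```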

Track B overview (refer to the readme for further information):

  • Participants are required to build intelligent agents that solve IP tasks by calling the simulation interfaces provided by the Agent Tool Server (server.py).
  • The challenge will leverage a dedicated multi-vendor network environment capable of:
      • exposing structured tasks to participating agents
      • connecting agents to domain-specific tool APIs
      • simulating CLI interactions for IP network devices (Huawei / Cisco / H3C)
  • Track B will focus on open-ended questions.
  • The number of questions differs across the three phases. In Phase 1, we will provide 50 questions. In Phase 2, we will share a new test set of 100 questions (released in batches of 20 problems every 3 days). 70 new questions will be used to produce the final leaderboard in Phase 3.
  • During Phase 1 and Phase 2, participants are required to submit the generated answers to the 50 and 100 questions, respectively (result.csv), which will be used to produce the leaderboard score.
  • During Phase 1:
      • Participants may run the server on their local machine or access it through API calls.
      • Agents call the remote Agent Tool Server via HTTP, and each request must include the Authorization header (see the sketch after this list).
      • The server source code and a local deployment guide will be provided to help participants deploy locally.
      • For participants using the cloud server, a maximum of 1,000 API calls per participant per day is allowed (only Phase 1 enforces this daily quota; Phase 2 and Phase 3 do not have this restriction).
  • During Phase 2, each participant is allowed only one submission run, and the execution trace must be uploaded to the server. Unlike Phase 1, there is no daily API call quota, but participants must ensure their single run completes successfully.
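
As a hedged sketch of the remote-access pattern described above, an agent could call the cloud-hosted Agent Tool Server roughly as follows; the base URL, endpoint path, and bearer-token scheme are assumptions, and the actual interface is defined by server.py and the track readme.

```python
import requests

BASE_URL = "https://agent-tool-server.example.com"  # hypothetical base URL
API_TOKEN = "YOUR_TOKEN"                            # issued to each participant

def call_tool(endpoint: str, payload: dict) -> dict:
    """POST a tool request to the remote Agent Tool Server over HTTP.

    Every request must carry the Authorization header; the Bearer scheme used
    here is an assumption. Keep the Phase 1 quota of 1,000 calls per
    participant per day in mind when using the cloud server.
    """
    resp = requests.post(
        f"{BASE_URL}/{endpoint.lstrip('/')}",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```
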
Evaluation

During Phase 1 and Phase 2, participants are required to submit the answers produced (result.csv), which will be used to produce the leaderboard score.

Track A:

  • In Phase 1 and Phase 2, participants are required to submit the answers produced to each set of 500 questions.
  • For each question, the accuracy is modelled by computing the intersection over union, that is, intersection(answers, ground truth) / union(answers, ground truth) (see the sketch after this list).
  • During Phase 2, participants can make up to three submissions, and the best of the three will be considered.
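
A minimal sketch of the stated per-question metric, assuming answers and ground truth use the pipe-separated format from the Track A overview:

```python
def iou_score(prediction: str, ground_truth: str) -> float:
    """Intersection-over-union accuracy for one Track A question.

    Both arguments are pipe-separated option strings, e.g. "C3|C7".
    """
    pred = set(filter(None, prediction.split("|")))
    truth = set(filter(None, ground_truth.split("|")))
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

print(iou_score("C3|C7", "C3|C7|C9"))  # 2 / 3 ≈ 0.667
```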

Track B:

  • In Phase 1 and Phase 2, participants are required to submit the answers produced to the 50 and 100 questions, respectively.
  • An answer is correct if the prediction is equal to the ground truth (exact match; see the sketch after this list).
  • During Phase 2, participants can make up to three submissions, and the best of the three will be considered.
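
A minimal exact-match scoring sketch for Track B; the "id" and "answer" column names are assumptions, since the official result.csv schema is defined in the track readme.

```python
import csv

def exact_match_accuracy(pred_csv: str, truth_csv: str) -> float:
    """Fraction of Track B questions whose prediction equals the ground truth.

    Assumes both CSV files carry hypothetical "id" and "answer" columns;
    follow the result.csv schema from the track readme for real submissions.
    """
    def load(path):
        with open(path, newline="") as f:
            return {row["id"]: row["answer"].strip() for row in csv.DictReader(f)}

    preds, truth = load(pred_csv), load(truth_csv)
    return sum(preds.get(qid) == ans for qid, ans in truth.items()) / len(truth)
```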

The top 30 solutions of each track on the public leaderboard will be selected for Phase 3. During Phase 3, the score for each question will be multiplied by a discount factor that measures the efficiency of the proposed agent, computed as follows:

| Answering time            | Discount |
| ------------------------- | -------- |
| `< 5 minutes`             | 100%     |
| `5 minutes  - 10 minutes` | 80%      |
| `10 minutes - 15 minutes` | 60%      |
| `> 15 minutes`            | 0%       |
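
For illustration, the Phase 3 discount could be applied per question roughly as follows; how answers landing exactly on the 5-, 10-, and 15-minute boundaries are bucketed is an assumption, and the table above defines the intended bands.

```python
def discounted_score(raw_score: float, answer_minutes: float) -> float:
    """Apply the Phase 3 efficiency discount to a per-question score."""
    if answer_minutes < 5:
        factor = 1.00   # < 5 minutes
    elif answer_minutes < 10:
        factor = 0.80   # 5-10 minutes
    elif answer_minutes < 15:
        factor = 0.60   # 10-15 minutes
    else:
        factor = 0.00   # > 15 minutes
    return raw_score * factor

print(discounted_score(0.9, 7))  # 0.9 * 0.80 = 0.72
```
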
Prizes

Track A

🥇 1st prize: EUR 12 500 + a leader pass (worth ¥2,995), plus up to USD 3 500 to cover travel and accommodation costs for one representative to attend MWC Shanghai in June 2026 and present their work to the AI for networks community

🥈 2nd prize: EUR 5 000

🥉 3rd prize: EUR 2 500

Track B

🥇 1st prize: EUR 12 500 + a leader pass (worth ¥2,995), plus up to USD 3 500 to cover travel and accommodation costs for one representative to attend MWC Shanghai in June 2026 and present their work to the AI for networks community

🥈 2nd prize: EUR 5 000

🥉 3rd prize: EUR 2 500

NB: we will award a maximum of one prize per participant or team; each person or team can only win in one track. Winning in Track A makes you ineligible for a prize in Track B, and vice versa.

There are 10 000 Zindi points available. You can read more about Zindi points here.

Rules
  • Languages and tools: You may only use open-source languages and tools in building models for this challenge.
  • Who can compete: Open to all
  • Submission Limits: see the description of the 2 tracks.
  • Team size: Max team size of 4
  • Public-Private Split: Zindi maintains a public leaderboard and a private leaderboard for each challenge. The Public Leaderboard includes approximately 30% of the test dataset. The private leaderboard will be revealed at the close of the challenge and contains the remaining 70% of the test set.
  • Data Sharing: CC-BY SA 4.0 license
  • Code Review: Top 30 on the public leaderboard will receive an email requesting their code at the close of the challenge. You will have 48 hours to submit your code.
  • Code sharing: Multiple accounts, or sharing of code and information across accounts not in teams, is not allowed and will lead to disqualification.

ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL CHALLENGE RULES.

Full Challenge Rules

This challenge is open to all.

Teams and collaboration

You may participate in challenges as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the challenge, minus the total number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.

Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.

Code must not be shared privately outside of a team. Any code that is shared must be made available to all challenge participants through the platform (i.e. on the discussion boards).

The Zindi data scientist who sets up a team is the default Team Leader but they can transfer leadership to another data scientist on the team. The Team Leader can invite other data scientists to their team. Invited data scientists can accept or reject invitations. Until a second data scientist accepts an invitation to join a team, the data scientist who initiated a team remains an individual on the leaderboard. No additional members may be added to teams within the final 5 days of the challenge or last hour of a hackathon.

The team leader can initiate a merge with another team. Only the team leader of the second team can accept the invite. The default team leader is the leader from the team who initiated the invite. Teams can only merge if the total number of members is less than or equal to the maximum team size of the challenge.

A team can be disbanded if it has not yet made a submission. Once a submission is made individual members cannot leave the team.

All members in the team receive points associated with their ranking in the challenge and there is no split or division of the points between team members.

Datasets, packages and general principles

The solution must use publicly-available, open-source packages only.

You may use only the datasets provided for this challenge.

Automated machine learning tools such as automl are not permitted.

You are allowed to access, use and share challenge data for any commercial, non-commercial, research or education purposes, under a CC-BY SA 4.0 license.

You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the challenge data, and work with Zindi to rectify any unauthorised transmission or access.

Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.

Submissions and winning

Before the end of the challenge you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.

During the challenge, your best public score will be displayed regardless of the submissions you have selected. When the challenge closes your best private score out of the 2 selected submissions will be displayed.

Zindi maintains a public leaderboard and a private leaderboard for each challenge. The Public Leaderboard includes approximately 20% of the test dataset. While the challenge is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the challenge, the Private Leaderboard, which covers the other 80% of the test dataset, will be made public and will constitute the final ranking for the challenge.

Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the challenge page.

If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of email, you will have 48 hours to respond and submit your code following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.

If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.

If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).

The winners will be paid via bank transfer, PayPal if payment is less than or equivalent to $100, or other international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top winners or team leaders will be required to present Zindi with proof of identification, proof of residence and a letter from your bank confirming your banking details. Winners will be paid in USD or the currency of the challenge. If your account cannot receive US Dollars or the currency of the challenge then your bank will need to provide proof of this and Zindi will try to accommodate this.

Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.

Payment will be made after code review and sealing the leaderboard.

You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this challenge constitutes your acceptance of these official challenge rules.

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

Zindi also reserves the right to disqualify you and/or your submissions from any challenge if we believe that you violated the rules or violated the spirit of the challenge or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.

Please refer to the FAQs and Terms of Use for additional rules that may apply to this challenge. We reserve the right to update these rules at any time.

Reproducibility of submitted code

If your submitted code does not reproduce your score on the leaderboard, we reserve the right to adjust your rank to the score generated by the code you submitted.

If your code does not run you will be dropped from the top 10. Please make sure your code runs before submitting your solution.

Always set the seed. Rerunning your model should always place you at the same position on the leaderboard. When running your solution, if randomness shifts you down the leaderboard we reserve the right to adjust your rank to the closest score that your submission reproduces.
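
As a minimal sketch, a single seed-setting helper called at the top of your pipeline covers the most common sources of randomness; add the equivalent calls for any other frameworks your solution uses.

```python
import os
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Fix common sources of randomness so reruns reproduce the same score."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # If your solution uses a deep learning framework, seed it too, e.g.
    # torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed).

set_seed(42)
```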

Custom packages in your submission notebook will not be accepted.

You may only use tools available to everyone i.e. no paid services or free trials that require a credit card.

Consequences of breaking any rules of the challenge or submission guidelines:

  • First offence: no prizes for 6 months and 2000 points will be removed from your profile (probation period). If you are caught cheating, all individuals involved will be disqualified from the challenge(s) in which the cheating occurred, you will be disqualified from winning any challenges for the next six months, and 2000 points will be removed from your profile. If you have fewer than 2000 points on your profile, your points will be set to 0.
  • Second offence: Banned from the platform. If you are caught for a second time your Zindi account will be disabled and you will be disqualified from winning any challenges or Zindi points using any other account.

Teams with individuals who are caught cheating will not be eligible to win prizes or points in the challenge in which the cheating occurred, regardless of the individuals’ knowledge of or participation in the offence.

Teams with individuals who have previously committed an offence will not be eligible for any prizes for any challenges during the 6-month probation period.

Monitoring of submissions

We will review the top 30 solutions of each track when the challenge ends.

We reserve the right to request code from any user at any time during a challenge. You will have 24 hours to submit your code following the rules for code review (see above). Zindi reserves the right not to explain our reasons for requesting code. If you do not submit your code within 24 hours you will be disqualified from winning any challenges or Zindi points for the next six months. If you fall under suspicion again and your code is requested and you fail to submit your code within 24 hours, your Zindi account will be disabled and you will be disqualified from winning any challenges or Zindi points with any other account.