The AI Telco Troubleshooting Challenge

€35,000
Completed (~1 month ago)
Root Cause Analysis
Fault Detection
Edge AI
Anomaly Detection
Large Language Models
1254 joined
253 active
Start: Nov 28, 25
Close: Feb 01, 26
Reveal: Feb 02, 26
Some top leaderboard participants joined Zindi less than a week ago, some even just a day ago!
2 Feb 2026, 02:19 · 21

It's almost impossible to train models and get a good score from scratch in such a short period. I feel it's unfair to people who have been building solutions since phase 1.

Discussion 21 answers
Aman_Deva

Yeah, I noticed it too. Some people joined 2 hours ago and got to the top of the leaderboard.

2 Feb 2026, 02:25
Upvotes 3

Plus the unrealistically high scores; someone must be cheating aggressively to reach the top 10.

2 Feb 2026, 02:27
Upvotes 1
Greenpark

A small LLM can never score highly on the general questions no matter how you train it; those high scores must have come from Gemini 3 or Claude Opus 4.5.

2 Feb 2026, 02:35
Upvotes 4

There are only 80 general questions in the test dataset; they could solve the general questions by themselves...

You have a high score too. I was surprised to see someone else surpass you on the leaderboard.

I tried participating, but I'm using free Google Colab with a weaker GPU, so runs take a long time, and I joined late too.

I would like us to interact and connect; I would also love to know your solution to this hackathon.

Yeah, I completely agree! This is indeed unfair to the contestants who have been working hard since the first stage. Additionally, it would be important to clarify or verify whether any teams might have included the test set in their training data. We hope the organizers can ensure fairness for all participants.

2 Feb 2026, 02:39
Upvotes 2

You took the words right out of my mouth.

Really? 1 day?

2 Feb 2026, 03:09
Upvotes 0

Yes, you can simply click through to the profiles of the top scorers to find them.

Any suggestions for detecting cheating, such as fine-tuning on the test dataset?

2 Feb 2026, 03:49
Upvotes 0

First, some submissions with unrealistically high scores won't be able to reproduce the same score on the phase 2 test at all, because they will fail the general test questions, especially with Qwen 2.5 1.5B.

Second, I think you mean the new unseen question types, like the A-I reasons. To find cheating via fine-tuning on the new questions, maybe we can compare the accuracy on both types (C1-C8 and A-I). If they are at the same level, that's a sign the model has been trained on the 'unseen' questions.
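The comparison proposed above can be sketched in a few lines. This is only an illustration: the question-type labels (C1-C8 as the phase-1 types, single letters A-I as the new types), the `(question_type, is_correct)` result format, and the 0.05 gap threshold are all assumptions, not part of the competition's data format or rules.

```python
# Hypothetical leakage check: compare a submission's accuracy on question
# types seen in phase 1 (assumed labels C1-C8) against the new "unseen"
# types (assumed single-letter labels A-I). Near-equal accuracy on both
# groups is the red flag described in the thread.

SEEN = {f"C{i}" for i in range(1, 9)}   # phase-1 question types (assumed labels)
UNSEEN = set("ABCDEFGHI")               # new question types (assumed labels)

def accuracy(results, types):
    """results: iterable of (question_type, is_correct) pairs."""
    scored = [bool(ok) for qtype, ok in results if qtype in types]
    return sum(scored) / len(scored) if scored else None

def flag_possible_leakage(results, gap=0.05):
    """Flag a submission whose 'unseen' accuracy nearly matches its 'seen' accuracy."""
    seen_acc = accuracy(results, SEEN)
    unseen_acc = accuracy(results, UNSEEN)
    if seen_acc is None or unseen_acc is None:
        return False  # not enough data to compare
    # A model that truly never saw the new types should usually score
    # noticeably lower on them; a small gap is suspicious.
    return (seen_acc - unseen_acc) < gap
```

Note this is only a heuristic: as pointed out later in the thread, generating new training examples from the training set is allowed and could legitimately close the gap.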

Sorry, but it's not clear to me why training on test data is considered cheating. Did the host ever say it's not allowed? If so, please link the discussion; maybe I missed it.

Koleshjr
Multimedia University of Kenya

Thanks. I didn't see that discussion.

A phase 3 may be introduced, where the top N participants are required to submit their models for evaluation against a held-out (unpublished) dataset.

Creating a new dataset from the training dataset is allowed, so I think it's reasonable for the accuracy on both types (C1-C8 and A-I) to be at the same level.

agboola_yusuf

I don't think this is an issue, provided every rule is followed. Everything is about knowledge, striving, and exposure; they might have been familiar with the task before and just came in late, and anyone can do that. Just make sure the 1st place and other high-scoring solutions are released publicly, otherwise it amounts to cheating.

2 Feb 2026, 06:16
Upvotes 0

Hello, there is no restriction on registration dates, so we will not take action on this. What we plan to do instead is to run a rigorous evaluation process to ensure fairness and rule compliance in the submissions. Thanks, everyone, for your participation.

2 Feb 2026, 10:12
Upvotes 2

All these complaints, but why do so few of you share your final contributions? It is a pity.

4 Feb 2026, 09:25
Upvotes 0

To our understanding, only the top 10 competitors are required to share their contributions.