
Specializing Large Language Models for Telecom Networks by ITU AI/ML in 5G Challenge

€6,000 EUR
Generative AI
457 joined · 131 active
Start: 7 May 2024 · Close: 26 July 2024 · Reveal: 26 July 2024
The Impact of Large Language Models on 6G and Beyond

Large language models (LLMs) have ushered in a new era of sophisticated text generation, advanced comprehension, and dynamic interaction. The evolutionary path of LLMs originates from the early stages of machine learning (ML) and natural language processing (NLP), characterized by the emergence of statistical language models and the gradual evolution of neural networks. Yet, the true transformation came through deep learning (DL) breakthroughs, particularly the rise of transformer architectures. These innovations have paved the way for language models with an unprecedented ability to process and generate large volumes of coherent text. Among these strides, OpenAI's Generative Pre-trained Transformer (GPT) series has emerged as a beacon, outshining its predecessors in both scale and capability and achieving human-like language understanding and generation.

While LLMs have undeniably demonstrated their prowess across diverse sectors, their integration into the telecommunications industry has been somewhat limited. However, this landscape is undergoing a gradual metamorphosis as researchers delve deeper into the potential of LLMs within this domain. With this competition, our objective is to tackle this challenge and pave the way for the development of telecom GPTs.

This challenge will adopt part of the recently developed TeleQnA dataset [1], composed of multiple-choice questions covering different classes of telecom knowledge domains [2], and will require participants to work on (at least one of) the following independent tasks:

  1. Specialize Falcon 7.5B on telecom knowledge: In this task, participants will download the Falcon 7.5B model [3] (https://huggingface.co/tiiuae/falcon-7b) and improve it on their local computing facilities. Participants will be required to enhance the accuracy of the baseline model when answering the multiple-choice questions included in the TeleQnA dataset by developing novel solutions or combining existing methods such as Retrieval-Augmented Generation (RAG) and prompt engineering.
  2. Specialize Phi-2 on telecom knowledge: In this task, participants will download Phi-2 [4] (https://huggingface.co/microsoft/phi-2) and improve it on their local computing facilities. Participants will be required to enhance the accuracy of the baseline model when answering the multiple-choice questions included in the TeleQnA dataset by developing novel solutions or combining existing methods such as fine-tuning, RAG, and prompt engineering (a minimal prompt-engineering sketch is given after the note below).

When designing/implementing the RAG, participants must not use the questions' answer options. Using the options in the RAG will be penalized during the evaluation process.
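
To illustrate the prompt-engineering route mentioned in Task 2, below is a minimal sketch that loads Phi-2 with the Hugging Face transformers library, formats one multiple-choice question into a prompt, and parses the predicted option number. The example question, the build_prompt helper, and the prompt wording are illustrative assumptions, not provided starter code; a real pipeline would loop over the TeleQnA questions and record one prediction per Question_ID.

```python
# A minimal prompt-engineering sketch (not official starter code): load Phi-2,
# turn one MCQ into a prompt, and parse the predicted option number.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/phi-2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# device_map="auto" needs the accelerate package; drop it to load on CPU instead.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def build_prompt(question: str, options: list[str]) -> str:
    # Present the options as a numbered list and ask for the option number only.
    numbered = "\n".join(f"({i + 1}) {opt}" for i, opt in enumerate(options))
    return (
        "You are a telecommunications expert. Answer the multiple-choice question "
        "by giving only the number of the correct option.\n\n"
        f"Question: {question}\n{numbered}\nAnswer: ("
    )

# Illustrative question; in practice, iterate over the TeleQnA questions.
question = "Which 3GPP release first specified the 5G NR air interface?"
options = ["Release 14", "Release 15", "Release 16", "Release 17"]

inputs = tokenizer(build_prompt(question, options), return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=3, do_sample=False)

# Keep only the newly generated tokens and take the first digit as the chosen option.
completion = tokenizer.decode(
    output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
predicted_option = next((int(c) for c in completion if c.isdigit()), None)
print(completion.strip(), "->", predicted_option)
```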

Challenges

Complexity and diversity of the questions in the dataset

  • The diversity of the questions makes the dataset a valuable resource for evaluating an LLM's strengths across the various standard specifications of the telecommunications domain. Technical specifications are intricate documents that define the technical standards for telecommunications systems. Off-the-shelf LLMs have shown relatively limited capabilities in answering complex inquiries drawn from these classes of questions.

LLM Hallucinations and Fabrications:

  • One of the key concerns with LLMs is their tendency to generate hallucinations or fabrications. LLMs rely on statistical patterns and associations learned from vast amounts of text during training. Consequently, they may produce responses that conform to these patterns but are factually incorrect or describe things that do not exist.

Limited explainability:

  • The complex architecture and massive number of parameters of LLMs make it difficult to trace their decision-making process. In fact, LLMs lack transparency in terms of the specific features or patterns they rely on to generate responses. This opacity hinders the ability to understand why a particular answer or response was chosen over others.

Organisations

About AI for Good - International Telecommunication Union (ITU)

AI for Good is organized by ITU in partnership with 40 UN Sister Agencies. The goal of AI for Good is to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It’s the leading action-oriented, global & inclusive United Nations platform on AI.

About Huawei Technologies (huawei.com)

Huawei Technologies is a multinational technology company dedicated to designing, developing, and providing high-quality telecommunications equipment and consumer electronics. It is now the world's largest supplier of telecommunications equipment, serving more than one-third of the world's population in more than 170 countries.

About the Technology Innovation Institute (TII)

The Technology Innovation Institute (TII) is an Abu Dhabi government-funded research institution that operates in the areas of artificial intelligence, quantum computing, autonomous robotics, cryptography, advanced materials, digital science, directed energy, and secure systems. The institute is part of the Abu Dhabi Government's Advanced Technology Research Council.

Evaluation

Participants in the challenge will be evaluated on three criteria, weighted 40:10:50 respectively:

  • The accuracy that their solutions reach on the TeleQnA validation dataset. NOTE: The validation dataset is not part of the TeleQnA dataset currently available online.
  • The readability and reproducibility of the delivered code.
  • The quality and novelty of the scientific paper presenting the proposed solution, the problem(s) tackled during the challenge, and the related experiments.

To evaluate the accuracy of the model(s), participants will be required to submit a CSV file through our portal with the following format:

For Falcon 7.5B

Question_ID   Answer_ID    Task
0               2          Falcon 7.5B
1               3          Falcon 7.5B
2               5          Falcon 7.5B

For Phi-2

Question_ID   Answer_ID    Task
0               2          Phi-2
1               3          Phi-2
2               5          Phi-2

Where Question_ID, Answer_ID, and Task denote, respectively: 1) the question IDs of the test set for which the participants are providing answers through the LLM, 2) the answers selected by the model for the corresponding questions, and 3) the task to which the submission relates, i.e., either Phi-2 or Falcon 7.5B.

The score of the submission will be evaluated in terms of Accuracy, i.e., the percentage of correctly provided answers in the test set.
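
For concreteness, the sketch below shows one way to assemble such a file with pandas; the predictions dictionary and the file name are placeholders rather than part of the official evaluation pipeline.

```python
# A minimal sketch of building the submission CSV; the predictions below are placeholders.
import pandas as pd

# Map question ID -> option ID selected by the model (illustrative values).
predictions = {0: 2, 1: 3, 2: 5}

submission = pd.DataFrame(
    {
        "Question_ID": list(predictions.keys()),
        "Answer_ID": list(predictions.values()),
        "Task": "Phi-2",  # use "Falcon 7.5B" for the other task
    }
)
submission.to_csv("submission.csv", index=False)

# If ground-truth labels are available locally, accuracy is simply the share of matches:
# accuracy = (submission["Answer_ID"].values == ground_truth["Answer_ID"].values).mean()
```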

In addition, paper submission, acceptance, and presentation at the IEEE Globecom 2024 (https://globecom2024.ieee-globecom.org/) workshop associated with this challenge (NOTE: at this stage, the workshop's acceptance at IEEE Globecom has not yet been confirmed) are strong requirements to win the competition; they will demonstrate the participants' ability to present their work, provide model explainability, and allow the community to reproduce their experiments. This challenge seeks to promote and encourage openness, rigor, reproducibility, and explainability of AI-based models; therefore, participants are strongly encouraged to share their solutions on GitHub.

Timeline
  • Challenge starts: 7 May 2024
  • Competition closes: 26 July 2024 at 23:59 GMT
  • Workshop paper submission deadline: 12 August 2024
  • Paper acceptance notification: 1 September 2024
  • Camera ready: 1 October 2024

Resources

In addition to the MCQs, we will provide a supporting corpus of documents that participants can use to enhance the accuracy of their models on the MCQs, e.g., 3GPP Release 18 documents that can give the LLMs the contextual knowledge needed to answer the telecom questions in the dataset.
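
As one possible way to exploit such a corpus, the sketch below embeds pre-chunked document text with the sentence-transformers library and retrieves the passages most relevant to a question; the embedding model, the example chunks, and the retrieve helper are illustrative assumptions, and splitting the 3GPP documents into chunks is left out. Per the note in the task description, only the question text (never the answer options) is used for retrieval.

```python
# A minimal retrieval sketch for RAG over a telecom corpus; the chunks below stand in
# for text extracted from the supporting documents and are purely illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any open embedding model can be used

chunks = [
    "The gNB provides NR user-plane and control-plane protocol terminations towards the UE.",
    "The AMF handles registration management and connection management for 5G devices.",
]
chunk_embeddings = encoder.encode(chunks, convert_to_tensor=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    # Embed only the question text; the answer options are never used for retrieval.
    query_embedding = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, chunk_embeddings, top_k=top_k)[0]
    return [chunks[hit["corpus_id"]] for hit in hits]

# Prepend the retrieved context to the LLM prompt before asking the question.
context = "\n".join(retrieve("Which network function manages UE registration in 5G?"))
print(context)
```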

To participate in the competition, participants will need to download Falcon-7B (https://huggingface.co/tiiuae/falcon-7b) and Phi-2 (https://huggingface.co/microsoft/phi-2), and run the models on their local computing facilities. Participants will have to develop novel solutions or combine existing methods such as RAG and prompt engineering.

  • An example of RAG for improving LLM knowledge on telecom standards is available at this link, based on the work presented in [5].
  • An example of prompt engineering for Phi-2 in the context of TeleQnA is available here, including accuracy results [6].
  • An example of the accuracy of Falcon-7B in the context of TeleQnA is available here [7].
  • An example of Phi-2 deployment and testing using the RAG in [5] on TeleQnA is available here.

Participants working on Falcon-7B shall not use fine-tuning methods, which can instead be explored by participants working on Phi-2.

On a first-come, first-served basis, we will provide a limited number of dedicated APIs to Falcon-7B, which will be freely available to participants for the duration of the challenge.

To request computational resources for running Falcon-7B, please fill in the following form: https://forms.office.com/r/Dx2jN5SWG8

Paper References

[1] TeleQnA: https://github.com/netop-team/TeleQnA

[2] A. Maatouk, F. Ayed, N. Piovesan, A. De Domenico, M. Debbah, and Z.-Q. Luo, “Teleqna: A benchmark dataset to assess large language models telecommunications knowledge,” arXiv preprint arXiv:2310.15051, 2023 https://arxiv.org/pdf/2310.15051.pdf

[3] E. Almazrouei, H. Alobeidli, A. Alshamsi, A. Cappelli, R. Cojocaru, M. Debbah, E. Goffinet, D. Hesslow, J. Launay, Q. Malartic et al., “The falcon series of open language models,” arXiv preprint arXiv:2311.16867, 2023 https://arxiv.org/pdf/2311.16867.pdf

[4] M. Javaheripi, and S. Bubeck, “Phi-2: The surprising power of small language models,” Dec. 2023 https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

[5] A.-L. Bornea, F. Ayed, A. De Domenico, N. Piovesan, A. Maatouk, “Telco-RAG: Navigating the Challenges of Retrieval-Augmented Language Models for Telecommunications” arXiv preprint arXiv:2404.15939, 2024 https://arxiv.org/abs/2404.15939

[6] N. Piovesan, A. De Domenico, and F. Ayed, “Telecom language models: Must they be large?” arXiv preprint arXiv:2401.08406, 2024.

[7] T. Ahmed, N. Piovesan, A. De Domenico, S. Choudhury, “Linguistic Intelligence in Large Language Models for Telecommunications” arXiv preprint arXiv:2402.15818, 2024 https://arxiv.org/pdf/2402.15818

Prizes

First place: 1 500 Euro

Second place: 1 000 Euro

Third place: 500 Euro

These prizes are awarded per task, meaning a total of €6,000 is available.

There are 5 000 Zindi points available. You can read more about Zindi points here.

  • How to get started with Zindi
  • How to enroll in your first Zindi competition
  • How to create a team on Zindi
  • How to update your profile on Zindi
  • How to use Colab on Zindi
  • How to mount a drive on Colab

Rules

This challenge is open to all.

Teams and collaboration

You may participate in competitions as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the competition, minus the total number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.

Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.

Code must not be shared privately outside of a team. Any code that is shared must be made available to all competition participants through the platform (i.e., on the discussion boards).

The Zindi data scientist who sets up a team is the default Team Leader but they can transfer leadership to another data scientist on the team. The Team Leader can invite other data scientists to their team. Invited data scientists can accept or reject invitations. Until a second data scientist accepts an invitation to join a team, the data scientist who initiated a team remains an individual on the leaderboard. No additional members may be added to teams within the final 5 days of the competition or last hour of a hackathon.

The team leader can initiate a merge with another team. Only the team leader of the second team can accept the invite. The default team leader is the leader from the team who initiated the invite. Teams can only merge if the total number of members is less than or equal to the maximum team size of the competition.

A team can be disbanded if it has not yet made a submission. Once a submission is made individual members cannot leave the team.

All members in the team receive points associated with their ranking in the competition and there is no split or division of the points between team members.

Datasets and packages

The solution must use publicly-available, open-source packages only.

You may use only the datasets provided for this competition. Automated machine learning tools such as automl are not permitted.

You may use pretrained models as long as they are openly available to everyone.

The data used in this competition is the sole property of Zindi and the competition host. You may not transmit, duplicate, publish, redistribute or otherwise provide or make available any competition data to any party not participating in the Competition (this includes uploading the data to any public site such as Kaggle or GitHub). You may upload, store and work with the data on any cloud platform such as Google Colab, AWS or similar, as long as 1) the data remains private and 2) doing so does not contravene Zindi’s rules of use.

You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the competition data, and work with Zindi to rectify any unauthorised transmission or access.

Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.

Submissions and winning

You may make a maximum of 10 submissions per day.

You may make a maximum of 300 submissions for this competition.

Before the end of the competition you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.

During the competition, your best public score will be displayed regardless of the submissions you have selected. When the competition closes your best private score out of the 2 selected submissions will be displayed.

Zindi maintains a public leaderboard and a private leaderboard for each competition. The Public Leaderboard includes approximately 50% of the test dataset. While the competition is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the competition, the Private Leaderboard, which covers 100% of the test set and is calculated on the same accuracy metric, will be made public and will constitute the final ranking for the competition.

Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the Competition’s discussion forum.

If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of email, you will have 48 hours to respond and submit your code following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.

If your solution receives a prize placement, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.

If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).

The winners will be paid via bank transfer, PayPal, or other international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top 3 winners or team leaders will be required to present Zindi with proof of identification, proof of residence and a letter from your bank confirming your banking details. Winners will be paid in USD or the currency of the competition. If your account cannot receive US Dollars or the currency of the competition then your bank will need to provide proof of this and Zindi will try to accommodate this.

Payment will be made after code review and sealing the leaderboard.

You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this competition constitutes your acceptance of these official competition rules.

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

Zindi also reserves the right to disqualify you and/or your submissions from any competition if we believe that you violated the rules or violated the spirit of the competition or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.

Please refer to the FAQs and Terms of Use for additional rules that may apply to this competition. We reserve the right to update these rules at any time.

Reproducibility of submitted code

  • If your submitted code does not reproduce your score on the leaderboard, we reserve the right to adjust your rank to the score generated by the code you submitted.
  • If your code does not run you will be dropped from the top 10. Please make sure your code runs before submitting your solution.
  • Always set the seed (see the sketch after this list). Rerunning your model should always place you at the same position on the leaderboard. When running your solution, if randomness shifts you down the leaderboard, we reserve the right to adjust your rank to the closest score that your submission reproduces.
  • Custom packages in your submission notebook will not be accepted.
  • You may only use tools available to everyone i.e. no paid services or free trials that require a credit card.
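
A minimal seed-setting sketch in Python is given below; extend it to cover any other sources of randomness in your pipeline (e.g., data shuffling or sampling during generation).

```python
# A minimal sketch of fixing random seeds so that reruns reproduce the same results.
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy
    torch.manual_seed(seed)           # PyTorch (CPU)
    torch.cuda.manual_seed_all(seed)  # PyTorch (all GPUs); a no-op without CUDA

set_seed(42)
```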

Documentation

A README markdown file is required

It should cover:

  • How to set up folders and where each file is saved
  • Order in which to run code
  • Explanations of features used
  • Environment for the code to be run (conda environment.yml file or an environment.txt file)
  • Hardware needed (e.g. Google Colab or the specifications of your local machine)
  • Expected run time for each notebook. This will be useful to the review team for time and resource allocation.

Your code needs to run properly; code reviewers do not have time to debug code. If code does not run easily, you will be bumped down the leaderboard.

Consequences of breaking any rules of the competition or submission guidelines:

  • First offence: No prizes for 6 months and 2000 points will be removed from your profile (probation period). If you are caught cheating, all individuals involved in cheating will be disqualified from the challenge(s) you were caught in and you will be disqualified from winning any competitions for the next six months and 2000 points will be removed from your profile. If you have less than 2000 points to your profile your points will be set to 0.
  • Second offence: Banned from the platform. If you are caught for a second time your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points using any other account.
  • Teams with individuals who are caught cheating will not be eligible to win prizes or points in the competition in which the cheating occurred, regardless of the individuals’ knowledge of or participation in the offence.
  • Teams with individuals who have previously committed an offence will not be eligible for any prizes for any competitions during the 6-month probation period.

Monitoring of submissions

  • We will review the top 10 solutions of every competition when the competition ends.
  • We reserve the right to request code from any user at any time during a challenge. You will have 24 hours to submit your code following the rules for code review (see above). Zindi reserves the right not to explain our reasons for requesting code. If you do not submit your code within 24 hours you will be disqualified from winning any competitions or Zindi points for the next six months. If you fall under suspicion again and your code is requested and you fail to submit your code within 24 hours, your Zindi account will be disabled and you will be disqualified from winning any competitions or Zindi points with any other account.