
Measuring What Matters Proposal Challenge by ITU

Prize pool: 1 000 CHF · 20 days left
Tags: Research, Writing, AI, Metric, Analytics, Markdown
Participants: 156 joined, 31 active
Start: Oct 24, 2025 · Close: Nov 21, 2025 · Reveal: Dec 12, 2025
Can you propose new ways to measure the environmental footprint of AI models?

AI systems are transforming the world — accelerating innovation, improving efficiency, and enabling new forms of creativity. But they also consume vast amounts of energy, water, and materials. From the training of large models to their everyday use and eventual hardware disposal, AI’s environmental footprint is growing rapidly.

Despite rising awareness, there is still no unified, transparent, and standardised way to measure or report AI’s true environmental impact. Different studies use different methods, metrics, and assumptions, leaving policymakers, developers, and users without the consistent data they need to make informed, sustainable decisions.

In partnership with ITU, we invite your best ideas and proposals to help close this measurement gap and reduce the environmental impact of AI systems.

Your task is to propose practical, forward-thinking solutions that make AI’s environmental footprint measurable, transparent, and accountable.

You will submit a written proposal (in Markdown format) that addresses one of the five challenge tracks below. You can tackle more than one track by submitting separate proposals for each one.

Your proposal can be conceptual or technical, but it should clearly:

  • Identify the problem you are addressing and explain why it matters.
  • Present a clear methodology or conceptual design for your proposed solution.
  • Include examples of data, tools or systems you would use or need.
  • Describe how it contributes to transparency, accountability, or sustainability in AI.
  • Outline any data sources, tools or collaborations you would use for implementation.

You should read the ITU report titled "Measuring what matters: How to assess AI's environmental impact" and the references therein as a starting point for your submission.

Challenge Tracks

1. Real-Time Telemetry & Monitoring Tools

Focus: How can we measure AI’s true energy, water, and carbon costs in real time?

Expected output: A conceptual design, prototype, or framework for a monitoring tool or API that tracks model energy and emissions during training or inference.

Examples: live dashboards, telemetry standards, or plug-ins that log and visualise energy use.
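
To make the idea concrete, here is a minimal sketch of what such a telemetry logger might look like, using NVIDIA's NVML bindings (pynvml). This is an illustrative starting point, not a prescribed design; the grid-intensity constant is an assumed placeholder that a production tool would replace with a live carbon-intensity feed.

```python
# Minimal GPU energy-logging sketch (assumes one NVIDIA GPU at index 0).
# pip install nvidia-ml-py  (imported as pynvml)
import time

import pynvml

GRID_KG_CO2E_PER_KWH = 0.4  # assumed placeholder; fetch from a grid-intensity API in practice


def log_power(interval_s: float = 1.0, samples: int = 10) -> None:
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    energy_kwh = 0.0
    try:
        for _ in range(samples):
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
            energy_kwh += watts * interval_s / 3_600_000.0           # W*s -> kWh
            grams = energy_kwh * GRID_KG_CO2E_PER_KWH * 1000
            print(f"power={watts:.1f} W  energy={energy_kwh:.6f} kWh  ~{grams:.3f} g CO2e")
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    log_power()
```

A full proposal would go well beyond this: aggregating across nodes, covering CPU, memory, and cooling overhead (PUE), and exposing the readings through a standard API or dashboard.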

2. Emission Attribution & Amortisation Frameworks

Focus: How can we fairly allocate AI’s training emissions across its lifetime usage?

Expected output: A model, algorithm, or policy framework for calculating per-inference or per-user carbon impact, based on training cost, model lifespan, and usage volume.

Examples: lifecycle carbon calculators or amortisation guidelines for standard reporting.
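
As a back-of-the-envelope illustration of the amortisation idea, the per-query footprint can be modelled as a share of the one-off training emissions spread over expected lifetime usage, plus the marginal inference cost. All figures in the sketch below are hypothetical, not measurements.

```python
def per_query_co2e_kg(training_co2e_kg: float,
                      lifetime_queries: float,
                      inference_co2e_kg_per_query: float) -> float:
    """Amortised footprint: training emissions per query plus marginal inference cost."""
    return training_co2e_kg / lifetime_queries + inference_co2e_kg_per_query


# Hypothetical figures: 300 t CO2e to train, 10 billion queries over the model's
# lifetime, 2 g CO2e marginal cost per query.
print(per_query_co2e_kg(300_000, 10e9, 0.002))  # 0.00203 kg, i.e. ~2.03 g CO2e per query
```

A real framework would also need to handle the hard parts this sketch hides, such as estimating a model's remaining lifetime usage and re-allocating emissions when a model is fine-tuned or distilled.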

3. Measurement Standardisation & Benchmarking

Focus: Different models and data centres measure differently — how can we compare them?

Expected output: A lightweight methodology or benchmark framework that defines common units (e.g., CO₂e/token, kWh/inference) and enables standardised comparison across hardware, platforms, or model types.

Examples: open metrics libraries, validation frameworks, or proposed international standards.
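
One way to anchor such a standard is a common reporting schema with fixed units. The sketch below shows how CO₂e/token and kWh/inference could be derived from a shared record format; the field names and figures are assumptions for illustration, not an existing standard.

```python
from dataclasses import dataclass


@dataclass
class BenchmarkRecord:
    model_name: str
    hardware: str                 # e.g. "1x A100-80GB"
    energy_kwh: float             # measured energy for the benchmark run
    grid_kg_co2e_per_kwh: float   # location-specific grid intensity
    tokens_processed: int
    inferences: int

    @property
    def kwh_per_inference(self) -> float:
        return self.energy_kwh / self.inferences

    @property
    def g_co2e_per_token(self) -> float:
        return self.energy_kwh * self.grid_kg_co2e_per_kwh * 1000 / self.tokens_processed


# Hypothetical run: 1.2 kWh to serve 10,000 requests totalling 5M tokens.
r = BenchmarkRecord("example-7b", "1x A100-80GB", 1.2, 0.4, 5_000_000, 10_000)
print(f"{r.kwh_per_inference:.2e} kWh/inference, {r.g_co2e_per_token:.2e} g CO2e/token")
```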

4. Green AI Architecture & Model Design

Focus: How can we design models that achieve high performance with lower energy use?

Expected output: A conceptual or technical plan for more frugal model architectures, efficient training methods, or data-optimised AI pipelines.

Examples: parameter-efficient architectures, energy-aware training strategies, or hardware-software co-optimisation methods.
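
To illustrate one such direction, here is a minimal low-rank adapter in the spirit of LoRA, written in PyTorch: the full-rank weight stays frozen and only two small rank-r matrices are trained, cutting trainable parameters (and with them fine-tuning compute and energy) by orders of magnitude. This is a sketch of the technique, not a prescribed method.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA-style)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the full-rank weights
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # ~0.4% of the layer
```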

5. Supply Chain Transparency & Scope 3 Mapping

Focus: What are the hidden environmental costs of AI’s hardware and data infrastructure?

Expected output: A framework, dataset, or visualisation approach that traces embodied carbon, material use, or e-waste across the AI hardware lifecycle.

Examples: open databases for GPU lifecycle emissions, visualisation maps of responsible sourcing, or cross-supplier traceability frameworks.
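
As a sketch of what one entry in such an open database might contain, consider the record below; every field name and figure is a hypothetical placeholder chosen to illustrate the schema, not real data.

```python
# One illustrative record in a GPU lifecycle-emissions dataset; all values invented.
gpu_record = {
    "device": "example-accelerator",
    "embodied_kg_co2e": {            # upstream / Scope 3
        "raw_materials": 30.0,       # mining and refining inputs
        "manufacturing": 150.0,      # wafer fabrication, packaging, assembly
        "transport": 5.0,
    },
    "use_phase": {                   # operational / Scope 2
        "rated_power_w": 400,
        "expected_lifetime_hours": 35_000,
    },
    "end_of_life": {
        "recyclable_fraction": 0.3,
        "e_waste_kg": 1.5,
    },
    "provenance": "citation to a manufacturer disclosure or LCA study",
}

print(f'embodied total: {sum(gpu_record["embodied_kg_co2e"].values())} kg CO2e')
```

The open question a proposal should tackle is provenance: how to verify supplier-reported figures and keep records comparable across vendors.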

Submission format:

  • File type: Markdown
  • Length: 4-6 pages (excluding references and annexes)
  • Naming convention: <username>_track_<track number>.md (e.g., User15_track_1.md if your username is User15 and you are addressing Track 1)
  • Recommended structure:
      • Title and chosen track
      • Problem statement and background
      • Proposed solution or framework
      • Implementation plan or methodology
      • Expected impact and contribution to sustainable AI
      • References

About AI for Good - International Telecommunication Union (ITU)

AI for Good is organised by ITU in partnership with 40 UN Sister Agencies. The goal of AI for Good is to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is the leading action-oriented, global and inclusive United Nations platform on AI.

Evaluation

This challenge will be evaluated in two phases: initial leaderboard positions will be determined by a preliminary rubric evaluation, and final winners will be decided by an expert panel's evaluation of the top 10 Phase One scores in each track. The evaluation rubrics for Phase One and Phase Two are below.

Phase One: Preliminary rubric evaluation

The goal of this stage is to assess completeness, clarity, and relevance, not technical depth.

Your initial leaderboard score will be awarded according to the following rubric:

  • Track alignment (20%): Does the proposal clearly identify which challenge track it addresses, and does it stay relevant to that track?
  • Problem definition & significance (20%): Is the problem clearly stated? Does it demonstrate understanding of AI’s environmental impact?
  • Methodology & conceptual design (25%): Does the proposal present a structured approach or framework for the solution?
  • Contribution to transparency, accountability or sustainability (20%): Does the proposal clearly explain how it enhances measurement, reporting, or environmental responsibility in AI systems?
  • Implementation readiness (10%): Does the proposal outline possible data sources, tools, or collaborations showing an understanding of feasibility?
  • Format & clarity (5%): Is the proposal well structured, clear and readable?

Phase Two: Final rubric evaluation

If you achieve a score in the top 10 for a specific track in Phase One, your proposal will be reviewed in Phase Two by an expert panel, according to the following rubric:

  • Innovation & originality (25%): Does the proposal introduce a novel approach to measuring or reducing AI’s environmental impact?
  • Technical feasibility & methodological soundness (25%): Is the proposal logical, implementable, and based on sound reasoning or evidence?
  • Environmental impact & relevance (25%): Does implementing the proposal clearly improve sustainability, transparency, or accountability in AI systems?
  • Clarity & communication (25%): Is the proposal well presented, logically organised, and easy to understand?

Rules
  • Languages and tools: You may only use open-source languages and tools in building models for this challenge.
  • Who can compete: Open to all
  • Submission Limits: 3 submissions per day, 20 submissions overall.
  • Team size: Max team size of 4
  • Public-Private Split: Zindi maintains a public leaderboard and a private leaderboard for each challenge. The Public Leaderboard will show the score of the preliminary evaluation, and the Private Leaderboard will show that of the secondary evaluation by a panel of experts. The Private Leaderboard will be revealed after the secondary evaluation is complete.
  • Data Sharing: CC BY-SA 4.0 license

ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL CHALLENGE RULES.

Full Challenge Rules

This challenge is open to all.

Teams and collaboration

You may participate in challenges as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the challenge, minus the total number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.

Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.

Code must not be shared privately outside of a team. Any code that is shared must be made available to all challenge participants through the platform (i.e. on the discussion boards).

The Zindi data scientist who sets up a team is the default Team Leader but they can transfer leadership to another data scientist on the team. The Team Leader can invite other data scientists to their team. Invited data scientists can accept or reject invitations. Until a second data scientist accepts an invitation to join a team, the data scientist who initiated a team remains an individual on the leaderboard. No additional members may be added to teams within the final 5 days of the challenge or last hour of a hackathon.

The team leader can initiate a merge with another team. Only the team leader of the second team can accept the invite. The default team leader is the leader from the team who initiated the invite. Teams can only merge if the total number of members is less than or equal to the maximum team size of the challenge.

A team can be disbanded if it has not yet made a submission. Once a submission is made, individual members cannot leave the team.

All members in the team receive points associated with their ranking in the challenge and there is no split or division of the points between team members.

Datasets, packages and general principles

The solution must use publicly-available, open-source packages only.

You may use only the datasets provided for this challenge.

You may use pretrained models as long as they are openly available to everyone.

Automated machine learning tools such as AutoML are not permitted.

If the error metric requires probabilities to be submitted, do not set thresholds (or round your probabilities) to improve your place on the leaderboard. To ensure that the client receives the best solution, Zindi will need the raw probabilities. This will allow clients to set thresholds to their own needs.

You are allowed to access, use and share challenge data for any commercial, non-commercial, research or education purposes, under a CC BY-SA 4.0 license.

You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the challenge data, and work with Zindi to rectify any unauthorised transmission or access.

Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.

Submissions and winning

You may make a maximum of 3 submissions per day.

You may make a maximum of 20 submissions for this challenge.

Before the end of the challenge you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.

During the challenge, your best public score will be displayed regardless of the submissions you have selected. When the challenge closes your best private score out of the 2 selected submissions will be displayed.

Zindi maintains a public leaderboard and a private leaderboard for each challenge. The Public Leaderboard includes approximately 20% of the test dataset. While the challenge is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the challenge, the Private Leaderboard, which covers the other 80% of the test dataset, will be made public and will constitute the final ranking for the challenge.

Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the challenge page.

If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of the email, you will have 48 hours to respond and submit your code following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.

If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.

If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).

The winners will be paid via bank transfer, via PayPal (if the payment is less than or equal to $100), or via another international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners.

The top winners or team leaders will be required to present Zindi with proof of identification, proof of residence, and a letter from their bank confirming their banking details. Winners will be paid in USD or the currency of the challenge. If your account cannot receive US Dollars or the currency of the challenge, your bank will need to provide proof of this and Zindi will try to accommodate you.

Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.

Payment will be made after code review and sealing the leaderboard.

You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this challenge constitutes your acceptance of these official challenge rules.

Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.

Zindi also reserves the right to disqualify you and/or your submissions from any challenge if we believe that you violated the rules or violated the spirit of the challenge or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.

Please refer to the FAQs and Terms of Use for additional rules that may apply to this challenge. We reserve the right to update these rules at any time.

Consequences of breaking any rules of the challenge or submission guidelines:

  • First offence: If you are caught cheating, all individuals involved will be disqualified from the challenge(s) in which the cheating occurred, you will be disqualified from winning any challenges for the next six months (probation period), and 2000 points will be removed from your profile. If you have fewer than 2000 points on your profile, your points will be set to 0.
  • Second offence: If you are caught a second time, your Zindi account will be disabled and you will be disqualified from winning any challenges or Zindi points using any other account.

Teams with individuals who are caught cheating will not be eligible to win prizes or points in the challenge in which the cheating occurred, regardless of the individuals’ knowledge of or participation in the offence.

Teams with individuals who have previously committed an offence will not be eligible for any prizes for any challenges during the 6-month probation period.

Monitoring of submissions

  • We will review the top 10 solutions of every challenge when the challenge ends.
  • We reserve the right to request code from any user at any time during a challenge. You will have 24 hours to submit your code following the rules for code review (see above). Zindi reserves the right not to explain our reasons for requesting code. If you do not submit your code within 24 hours you will be disqualified from winning any challenges or Zindi points for the next six months. If you fall under suspicion again and your code is requested and you fail to submit your code within 24 hours, your Zindi account will be disabled and you will be disqualified from winning any challenges or Zindi points with any other account.

Prizes

1st prize: 500 CHF

2nd prize: 300 CHF

3rd prize: 200 CHF

There are 500 Zindi points available. You can read more about Zindi points here.

How to get started with Zindi

  • How to get started on Zindi
  • How to create a team on Zindi
  • How to run notebooks in Colab
  • How to update your profile