In Barbados, all survey plans must be submitted in analog form to the Chief Surveyor's Office in the Lands and Surveys Department. Currently, these analog documents are manually transcribed for entry into Barbados' Survey Plan Register, a process that is costly in time and manpower and prone to data-capture errors.
Your challenge: build a solution that can automatically detect and extract land plot shapes from scanned survey plans, and convert them into standard digital formats like shapefiles. On top of that, you'll also need to pull out important metadata like the lot number, parcel area, and surveyor name, using OCR or other machine learning techniques. The extracted digital data will then be in a format that can easily be entered into a digital registry.
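One possible route for the final conversion step is sketched below using the open-source geopandas package. The polygon coordinates, attribute columns and file name are illustrative assumptions, not part of the challenge specification.
import geopandas as gpd
from shapely.geometry import Polygon

# Illustrative values only (the lot number and surveyor echo the sample
# submission row shown later in this page).
plot = Polygon([(40621.9, 66595.9), (40650.0, 66595.9), (40650.0, 66620.0), (40621.9, 66620.0)])
gdf = gpd.GeoDataFrame(
    {"LotNum": ["77.03.08.014"], "Surveyor": ["andre clarke"]},
    geometry=[plot],
)
# Assign the appropriate coordinate reference system for the source plans
# before writing; it is deliberately left unset in this sketch.
gdf.to_file("extracted_plots.shp")  # writes an ESRI Shapefile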
This is a pilot challenge, focused on just one district in Barbados, but the winning models will be put into practice for the whole country. Your models will help modernise how land records are handled, saving time and money, reducing errors, and unlocking new social and economic benefits for all on the island.
About Lands and Surveys Department
The Lands and Surveys Department is on a trajectory to provide robust and contemporary services to the public of Barbados. We can transform our country by seeking after high innovative standards of operation. - Mr. David McCollin
The Lands and Surveys Department's vision is to be the hub of all surveying, mapping and geospatial services and the national surveying agency of Government, coordinating all surveying services within the Ministry of Housing, Lands and Maintenance and thus strengthening the base of surveying expertise in Government.
This challenge uses multi-metric evaluation. There are three metrics: Word Error Rate (WER), Multi-Column Accuracy (MCA) and Intersection over Union on polygons (IoU Polygon).
To perform well in this challenge, your solution needs to be both correct in what it predicts (MCA and WER) and precise in the shapes it draws (IoU Polygon).
The final score on the leaderboard is the weighted mean of the three evaluation metrics.
Metric        Weighting
IoU Polygon   0.5
WER           0.2
MCA           0.3
WER is calculated on the TargetSurvey column, which combines the Land Surveyor, Surveyed For and Address fields.
MCA is calculated as the average accuracy across the Certified date, Total Area, Unit of Measurement, Parish and LT Num columns.
IoU Polygon is how well your polygon overlaps with the reference polygon. Perfect overlap will return a score of 1, whereas no overlap whatsoever returns a score of 0.
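As a rough illustration of how the weights combine into a single score (a sketch only; whether the official scorer uses raw WER or a higher-is-better transform such as 1 - WER is not stated here):
def leaderboard_score(iou: float, wer: float, mca: float) -> float:
    """Weighted mean of the three metrics -- illustrative sketch only.

    Assumption: WER is an error rate (lower is better), so it is folded in
    here as (1 - wer); the exact orientation used by the official scorer is
    not specified in the challenge description.
    """
    return 0.5 * iou + 0.2 * (1.0 - wer) + 0.3 * mca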
Your submission file must follow the SampleSubmission.csv file exactly, especially the order and casing of the headers. Please note the first 9 columns must be strings; if you predict a NaN value, fill it in with an empty string ("").
ID | TargetSurvey | Certified date | Total Area | Unit of Measurement | Parish | LT Num | geometry
7703-078 | andre clarke d & a developers ltd lot 1 foul bay | 2013-11-22 | 411.0 | sq m | St. Philip | 77.03.08.014 | "[(40621.893100373105, 66595.8724605032), ....]"
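A minimal sketch of preparing the file for upload, assuming your predictions already sit in a pandas DataFrame named submission (a placeholder name) with the columns above:
import pandas as pd

# Fill missing predictions in the string columns with "" before writing, as
# required above; the geometry column is left as the coordinate-list string.
string_cols = ["ID", "TargetSurvey", "Certified date", "Total Area",
               "Unit of Measurement", "Parish", "LT Num"]
submission[string_cols] = submission[string_cols].fillna("").astype(str)
submission.to_csv("submission.csv", index=False)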
During code review, the top 10 solutions on the private leaderboard will be re-trained on the full train and test sets (with the answers) and then used to run inference on 20 completely unseen plans. Final placements for prize winners will be based on your overall score on this hidden test set and will supersede your private leaderboard ranking.
Read this article on how to prepare your documentation and this article on how to ensure a successful code review.
Below are two functions that will help you prepare your submission and evaluate your results correctly.
1. How to Create the TargetSurvey Field
The TargetSurvey column is created by concatenating Land Surveyor, Surveyed For, and Address (in that order), all lowercased, with punctuation removed and whitespace normalized. This ensures fair WER evaluation and standardizes the format.
import re
import pandas as pd


def clean_target_survey(text: str) -> str:
    """Lowercase, remove periods and commas, normalize spaces."""
    text = text.lower()
    text = re.sub(r"[.,]", " ", text)  # remove periods and commas
    text = re.sub(r"\s+", " ", text)   # normalize multiple spaces
    return text.strip()


def format_dataset(df: pd.DataFrame) -> pd.DataFrame:
    """
    Adds TargetSurvey and keeps only the required columns.
    Applies lowercasing, removes "." and ",", and normalizes spaces.
    """
    df["TargetSurvey"] = (
        df["Land Surveyor"].astype(str).str.strip() + " " +
        df["Surveyed For"].astype(str).str.strip() + " " +
        df["Address"].astype(str).str.strip()
    ).apply(clean_target_survey)
    columns_to_keep = [
        'ID', 'TargetSurvey', 'Certified date', 'Total Area',
        'Unit of Measurement', 'Parish', 'LT Num', 'geometry',
    ]
    return df[columns_to_keep]
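As a quick check of the helpers above, a single illustrative row (the raw field values are hypothetical, chosen to reproduce the sample TargetSurvey shown in the submission format) can be passed through format_dataset:
raw = pd.DataFrame([{
    "ID": "7703-078",
    "Land Surveyor": "Andre Clarke",
    "Surveyed For": "D & A Developers Ltd.",
    "Address": "Lot 1, Foul Bay",
    "Certified date": "2013-11-22",
    "Total Area": "411.0",
    "Unit of Measurement": "sq m",
    "Parish": "St. Philip",
    "LT Num": "77.03.08.014",
    "geometry": "[(40621.89, 66595.87), ...]",
}])
submission = format_dataset(raw)
print(submission["TargetSurvey"].iloc[0])
# -> andre clarke d & a developers ltd lot 1 foul bay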
2. How the geometry Field is Used for IoU Evaluation
For the IoU Polygon calculation, you'll typically want to convert the polygon in the geometry column to a binary mask (e.g. for comparing ground truth against prediction). The function below demonstrates how to convert a shapely polygon to a 2D binary mask suitable for pixel-based IoU calculation:
import numpy as np
import cv2


def polygon_to_mask(polygon, width=256, height=256):
    """
    Convert a polygon to a binary mask array, ensuring correct orientation.

    Parameters:
        polygon: shapely Polygon object
        width, height: dimensions of the output mask

    Returns:
        numpy array (binary mask)
    """
    if polygon is None or polygon.is_empty:
        return np.zeros((height, width), dtype=np.uint8)

    # Get bounds of the polygon
    minx, miny, maxx, maxy = polygon.bounds

    # Get polygon exterior coordinates
    coords = np.array(polygon.exterior.coords)

    # Transform coordinates to pixel space
    x_scale = width / (maxx - minx) if maxx != minx else 1
    y_scale = height / (maxy - miny) if maxy != miny else 1

    # Convert to pixel coordinates (flip y)
    pixel_coords = np.array([
        [(x - minx) * x_scale, (maxy - y) * y_scale]
        for x, y in coords
    ], dtype=np.int32)

    # Create empty mask
    mask = np.zeros((height, width), dtype=np.uint8)

    # Fill polygon using OpenCV
    cv2.fillPoly(mask, [pixel_coords], 1)
    return mask
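Once both the reference polygon and your predicted polygon have been rasterised onto the same grid, pixel-based IoU is simply the intersection of the two masks divided by their union. The mask_iou helper and toy masks below are illustrative, not part of the official scoring code:
def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Pixel-based IoU between two binary masks of identical shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Toy 4x4 masks: each has 4 foreground pixels, overlapping on 1, so IoU = 1/7.
a = np.zeros((4, 4), dtype=np.uint8)
a[0:2, 0:2] = 1
b = np.zeros((4, 4), dtype=np.uint8)
b[1:3, 1:3] = 1
print(mask_iou(a, b))  # ~0.143
Note that polygon_to_mask rescales each polygon to its own bounds, so to compare a prediction against the reference this way you would rasterise both onto a shared extent (or compute IoU directly on the polygon geometries with shapely's intersection and union).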
1st place: $5 000 USD
2nd place: $3 000 USD
3rd place: $2 000 USD
There are 10 000 Zindi points available. You can read more about Zindi points here.
🚀 What to know to get started with Zindi Challenges
How to get started on Zindi
How to create a team on Zindi
How to run notebooks in Colab
How to update your profile
This dataset is provided solely for the purpose of participating in the Barbados Lands and Surveys Plot Automation Challenge hosted on Zindi. Use of this dataset outside of this scope is strictly prohibited.
You may not copy, distribute, transmit, publish, or use this dataset for any other research, commercial, educational, or public purpose. This includes, but is not limited to, uploading to public repositories or using for other competitions.
Violation of this license may result in disqualification and potential legal action.
ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL CHALLENGE RULES.
This challenge is open to all.
Teams and collaboration
You may participate in challenges as an individual or in a team of up to four people. When creating a team, the team must have a total submission count less than or equal to the maximum allowable submissions as of the formation date. A team will be allowed the maximum number of submissions for the challenge, minus the total number of submissions among team members at team formation. Prizes are transferred only to the individual players or to the team leader.
Multiple accounts per user are not permitted, and neither is collaboration or membership across multiple teams. Individuals and their submissions originating from multiple accounts will be immediately disqualified from the platform.
Code must not be shared privately outside of a team. Any code that is shared must be made available to all challenge participants through the platform (i.e. on the discussion boards).
The Zindi data scientist who sets up a team is the default Team Leader but they can transfer leadership to another data scientist on the team. The Team Leader can invite other data scientists to their team. Invited data scientists can accept or reject invitations. Until a second data scientist accepts an invitation to join a team, the data scientist who initiated a team remains an individual on the leaderboard. No additional members may be added to teams within the final 5 days of the challenge or last hour of a hackathon.
The team leader can initiate a merge with another team. Only the team leader of the second team can accept the invite. The default team leader is the leader from the team who initiated the invite. Teams can only merge if the total number of members is less than or equal to the maximum team size of the challenge.
A team can be disbanded if it has not yet made a submission. Once a submission is made individual members cannot leave the team.
All members in the team receive points associated with their ranking in the challenge and there is no split or division of the points between team members.
Datasets, packages and general principles
The solution must use publicly-available, open-source packages only.
You may use only the datasets provided for this challenge.
You may use pretrained models as long as they are openly available to everyone.
Automated machine learning tools such as automl are not permitted.
If the error metric requires probabilities to be submitted, do not set thresholds (or round your probabilities) to improve your place on the leaderboard. In order to ensure that the client receives the best solution Zindi will need the raw probabilities. This will allow the clients to set thresholds to their own needs.
You must notify Zindi immediately upon learning of any unauthorised transmission of or unauthorised access to the challenge data, and work with Zindi to rectify any unauthorised transmission or access.
Your solution must not infringe the rights of any third party and you must be legally entitled to assign ownership of all rights of copyright in and to the winning solution code to Zindi.
Submissions and winning
You may make a maximum of 10 submissions per day.
You may make a maximum of 300 submissions for this challenge.
Before the end of the challenge you need to choose 2 submissions to be judged on for the private leaderboard. If you do not make a selection your 2 best public leaderboard submissions will be used to score on the private leaderboard.
During the challenge, your best public score will be displayed regardless of the submissions you have selected. When the challenge closes your best private score out of the 2 selected submissions will be displayed.
Zindi maintains a public leaderboard and a private leaderboard for each challenge. The Public Leaderboard includes approximately 20% of the test dataset. While the challenge is open, the Public Leaderboard will rank the submitted solutions by the accuracy score they achieve. Upon close of the challenge, the Private Leaderboard, which covers the other 80% of the test dataset, will be made public and will constitute the final ranking for the challenge.
Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the challenge page.
If you are in the top 10 at the time the leaderboard closes, we will email you to request your code. On receipt of email, you will have 48 hours to respond and submit your code following the Reproducibility of submitted code guidelines detailed below. Failure to respond will result in disqualification.
If your solution places 1st, 2nd, or 3rd on the final leaderboard, you will be required to submit your winning solution code to us for verification, and you thereby agree to assign all worldwide rights of copyright in and to such winning solution to Zindi.
If two solutions earn identical scores on the leaderboard, the tiebreaker will be the date and time in which the submission was made (the earlier solution will win).
The winners will be paid via bank transfer, PayPal if payment is less than or equivalent to $100, or other international money transfer platform. International transfer fees will be deducted from the total prize amount, unless the prize money is under $500, in which case the international transfer fees will be covered by Zindi. In all cases, the winners are responsible for any other fees applied by their own bank or other institution for receiving the prize money. All taxes imposed on prizes are the sole responsibility of the winners. The top winners or team leaders will be required to present Zindi with proof of identification, proof of residence and a letter from your bank confirming your banking details. Winners will be paid in USD or the currency of the challenge. If your account cannot receive US Dollars or the currency of the challenge then your bank will need to provide proof of this and Zindi will try to accommodate this.
Please note that due to the ongoing Russia-Ukraine conflict, we are not currently able to make prize payments to winners located in Russia. We apologise for any inconvenience that may cause, and will handle any issues that arise on a case-by-case basis.
Payment will be made after code review and sealing the leaderboard.
You acknowledge and agree that Zindi may, without any obligation to do so, remove or disqualify an individual, team, or account if Zindi believes that such individual, team, or account is in violation of these rules. Entry into this challenge constitutes your acceptance of these official challenge rules.
Zindi is committed to providing solutions of value to our clients and partners. To this end, we reserve the right to disqualify your submission on the grounds of usability or value. This includes but is not limited to the use of data leaks or any other practices that we deem to compromise the inherent value of your solution.
Zindi also reserves the right to disqualify you and/or your submissions from any challenge if we believe that you violated the rules or violated the spirit of the challenge or the platform in any other way. The disqualifications are irrespective of your position on the leaderboard and completely at the discretion of Zindi.
Please refer to the FAQs and Terms of Use for additional rules that may apply to this challenge. We reserve the right to update these rules at any time.
Reproducibility of submitted code
If your submitted code does not reproduce your score on the leaderboard, we reserve the right to adjust your rank to the score generated by the code you submitted.
If your code does not run you will be dropped from the top 10. Please make sure your code runs before submitting your solution.
Always set the seed. Rerunning your model should always place you at the same position on the leaderboard. When running your solution, if randomness shifts you down the leaderboard we reserve the right to adjust your rank to the closest score that your submission reproduces.
Custom packages in your submission will not be accepted.
All data manipulation must be done in code, manual manipulation via manual labelling or Excel will lead to disqualification.
You may only use tools available to everyone i.e. no paid services or free trials that require a credit card.
Consequences of breaking any rules of the challenge or submission guidelines:
Teams with individuals who are caught cheating will not be eligible to win prizes or points in the challenge in which the cheating occurred, regardless of the individuals’ knowledge of or participation in the offence.
Teams with individuals who have previously committed an offence will not be eligible for any prizes for any challenges during the 6-month probation period.
Monitoring of submissions