The dataset consists of field images captured from bean breeding plots under real agricultural conditions. Images were collected to support automated phenotyping of flowering intensity.
Each image represents a defined test area within a breeding plot.
Participants are provided with a labeled dataset of bean plant images annotated for flower detection. The goal is to train models that can accurately localize flower instances in images.
Each image is identified by a unique Image_ID.
Images are high-resolution, with dimensions provided via:
Multiple objects (flowers) may appear in a single image.
The training data is provided in tabular format, with one row per annotated object instance. Each object is assigned a label indicating its flowering stage.
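A minimal sketch of working with this tabular layout, grouping annotation rows by image so each `Image_ID` maps to all of its object instances. The column names (`Image_ID`, `class`, `ymin`, `xmin`, `ymax`, `xmax`) and the inline sample rows are assumptions for illustration; adjust them to the actual CSV schema.

```python
import pandas as pd

# Hypothetical annotation rows; in practice this would come from
# something like pd.read_csv("Train.csv") with the real columns.
rows = [
    {"Image_ID": "img_001", "class": "flower", "ymin": 10, "xmin": 20, "ymax": 50, "xmax": 60},
    {"Image_ID": "img_001", "class": "flower", "ymin": 70, "xmin": 15, "ymax": 110, "xmax": 55},
    {"Image_ID": "img_002", "class": "flower", "ymin": 5, "xmin": 5, "ymax": 40, "xmax": 45},
]
annotations = pd.DataFrame(rows)

# Group so each image maps to an array of its object boxes:
# one row in the table = one instance in the image.
per_image = {
    image_id: group[["ymin", "xmin", "ymax", "xmax"]].to_numpy()
    for image_id, group in annotations.groupby("Image_ID")
}
print({k: v.shape for k, v in per_image.items()})
# → {'img_001': (2, 4), 'img_002': (1, 4)}
```

Grouping by `Image_ID` is the natural first step for any detector, since training samples are images with lists of boxes, not individual rows.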
In addition to its class label, each annotated object should be interpreted as a distinct instance with its own spatial extent. This challenge targets instance-level understanding: each flower or plant must be treated as an individual object rather than as part of a single aggregated region.
While bounding box coordinates are provided in the training data, participants are expected to develop models that capture pixel-level object boundaries (instance segmentation), improving localization and separation of overlapping or closely spaced objects.
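One common way to bootstrap instance segmentation from box-only labels is to treat each box as a coarse rectangular pseudo-mask, keeping overlapping boxes as separate instances. The sketch below assumes pixel-coordinate boxes in `(ymin, xmin, ymax, xmax)` order; the actual box format in the training table may differ.

```python
import numpy as np

def boxes_to_instance_masks(boxes, height, width):
    """Return one boolean mask per box (shape: [n_boxes, H, W]).

    Each instance keeps its own mask, so overlapping boxes do not
    merge into a single aggregated region.
    """
    masks = np.zeros((len(boxes), height, width), dtype=bool)
    for i, (ymin, xmin, ymax, xmax) in enumerate(boxes):
        masks[i, ymin:ymax, xmin:xmax] = True
    return masks

# Two overlapping hypothetical instances in a 12x16 image.
boxes = [(2, 3, 6, 8), (4, 6, 9, 12)]
masks = boxes_to_instance_masks(boxes, height=12, width=16)
print(masks.shape, int(masks[0].sum()), int(masks[1].sum()))
# → (2, 12, 16) 20 30
```

Rectangular pseudo-masks are only a weak starting point; they can serve as initial targets for a mask-predicting model, which then refines them toward true flower boundaries.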
Images can be downloaded from this page or through this link:
https://storage.googleapis.com/bean-flowering/images.zip