"Participants should note that while the polygon annotations are carefully crafted, some may be imprecise. It is up to you to decide on the best strategy to handle such cases in your model." I don't care how hard they tried to curate the dataset; the result is evidently quite bad. Nowhere in "Info" or "Data" do they rule out manually fixing the poor annotations. As long as you don't ever touch the *test* images and your model only ever encounters the *test* images at inference for making submission, it should be allowed to improve the *training* dataset as one "strategy to handle [poor annotation] cases".
Annotating the test images and training your model on them should obviously be regarded as cheating.
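To make "improving the training dataset" concrete, here is a minimal sketch of how one might start: flag suspect training polygons for manual review rather than fixing anything automatically. This assumes COCO-style annotations (each "segmentation" entry is a flat `[x1, y1, x2, y2, ...]` list) and the shapely library; the file name and area threshold are hypothetical.

```python
# Sketch: flag training polygons that look suspect (degenerate, self-intersecting,
# or tiny) so they can be reviewed and fixed by hand. Only the training set is read.
import json
from shapely.geometry import Polygon

MIN_AREA_PX = 25  # hypothetical threshold: polygons smaller than this get flagged

with open("train_annotations.json") as f:  # hypothetical path to the training annotations
    coco = json.load(f)

suspect_ids = []
for ann in coco["annotations"]:
    for seg in ann["segmentation"]:
        pts = list(zip(seg[0::2], seg[1::2]))
        if len(pts) < 3:
            # Fewer than 3 vertices cannot form a polygon at all.
            suspect_ids.append(ann["id"])
            break
        poly = Polygon(pts)
        # Self-intersecting/invalid polygons or ones with tiny area are worth a manual look.
        if not poly.is_valid or poly.area < MIN_AREA_PX:
            suspect_ids.append(ann["id"])
            break

print(f"{len(suspect_ids)} annotations flagged for manual review")
```

The point of flagging instead of auto-correcting is that the final judgment stays with a human looking at the *training* image, which is exactly the kind of curation the quoted rule seems to leave open.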
"Participants should note that while the polygon annotations are carefully crafted, some may be imprecise. It is up to you to decide on the best strategy to handle such cases in your model." I don't care how hard they tried to curate the dataset; the result is evidently quite bad. Nowhere in "Info" or "Data" do they rule out manually fixing the poor annotations. As long as you don't ever touch the *test* images and your model only ever encounters the *test* images at inference for making submission, it should be allowed to improve the *training* dataset as one "strategy to handle [poor annotation] cases".
Annotating the test images and training your model on them should obviously be regarded as cheating.