I've only just figured out how to approach this competition, and I'm wondering what F1 scores you're getting in training. The best I've managed so far is 0.4.
Well, I'm getting F1 scores close to 0.83, but it doesn't translate well to the LB. I think a better indicator might be to change the validation metric from F1 to mean absolute error on the building counts, and use that instead. I don't know; that's my current idea, since the F1 route isn't giving me reliable scores.
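For what it's worth, here is a minimal sketch of that count-based metric: MAE between the number of buildings per image in the ground truth and in the predictions. The function name and the example counts are made up for illustration; you'd plug in counts from your own labels/detections.

```python
import numpy as np

def count_mae(true_counts, pred_counts):
    """Mean absolute error between per-image building counts.

    true_counts / pred_counts: one integer per validation image.
    """
    true_counts = np.asarray(true_counts, dtype=float)
    pred_counts = np.asarray(pred_counts, dtype=float)
    return float(np.mean(np.abs(true_counts - pred_counts)))

# Hypothetical counts for four validation images
true = [12, 5, 0, 30]
pred = [10, 6, 1, 25]
print(count_mae(true, pred))  # 2.25
```

Unlike F1, this ignores localization quality entirely, so it's probably best used alongside F1 rather than as a replacement.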
Impressive that you're getting such good F1 scores. Nothing I've tried seems to work well so far. I'm just running the starter notebook now to see what falls out of it.
It could also be that the test data is significantly different from the training data. F1 and the local validation metric look good on the xView data, but the LB is another story.
The test data appears to be drone imagery, whereas the training data is satellite imagery.
I'll attempt YOLO afterwards.
It might also be that I'm doing something wrong, so I'll take another look at my approach.