Great, I really appreciate this. Congratulations once again. 🔥
@CodeJoe You're welcome.
First, congratulations on your impressive result!
Second, thanks for sharing your solution, but it reads like a standard guide to training a YOLO model. You haven't told us the tricks that led to your impressive score. The only things I noticed are running WBF on a single model during inference and the cross-validation choice of including all minority classes in the training folds. But you haven't gone in depth into the configurations that worked, for example the thresholds used at inference, both in the model and in the WBF, etc. I would appreciate it if you shared the tricks you used to get that impressive score.
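For readers unfamiliar with the WBF step mentioned above: weighted boxes fusion averages overlapping detections instead of discarding them as NMS does. The sketch below is a minimal single-class, pure-NumPy illustration of the idea (solutions typically use the `ensemble-boxes` package); the IoU threshold here is illustrative, not the value used in the winning solution.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def wbf(boxes, scores, iou_thr=0.55):
    """Fuse overlapping single-class boxes into score-weighted averages."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    clusters, fused = [], []          # member indices / current fused box
    for i in scores.argsort()[::-1]:  # highest-confidence boxes first
        for c, fb in enumerate(fused):
            if iou(boxes[i], fb) > iou_thr:
                clusters[c].append(i)
                w = scores[clusters[c]]
                # recompute the cluster box as a confidence-weighted mean
                fused[c] = (boxes[clusters[c]] * w[:, None]).sum(0) / w.sum()
                break
        else:  # no cluster matched: start a new one
            clusters.append([i])
            fused.append(boxes[i].copy())
    fused_scores = np.array([scores[c].mean() for c in clusters])
    return np.array(fused), fused_scores

fused_b, fused_s = wbf(
    [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]],
    [0.9, 0.8, 0.7],
)
```

With these toy inputs, the first two boxes (IoU ≈ 0.68) merge into one weighted box with score 0.85, while the third stays on its own.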
@Koleshjr You're welcome. Anyway, the exact parameters used for WBF may not be shared, but the major trick lies below, if you take the time to read through carefully:
Filtering the Data:
The dataset is divided based on the number of samples per class: classes with fewer than 1000 samples are isolated into a separate DataFrame for special handling. These minority classes need to be included in all training folds to ensure the model sees enough examples of them. The remaining classes (those with 1000 or more samples) are kept as the primary dataset for cross-validation.
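The split described above can be sketched as follows. The column names (`image_id`, `class_id`), the toy data, and the reduced threshold are all illustrative, standing in for the real annotation table and its 1000-sample cutoff; grouping the fold split by image keeps all boxes from one image on the same side of the split.

```python
import pandas as pd
from sklearn.model_selection import GroupKFold

# Toy annotation table: one row per labelled box.
df = pd.DataFrame({
    "image_id": [f"img_{i}" for i in range(12)],
    "class_id": ["car"] * 8 + ["pothole"] * 4,
})
MIN_SAMPLES = 5  # stands in for the real 1000-sample threshold

counts = df["class_id"].value_counts()
minority = counts[counts < MIN_SAMPLES].index          # rare classes
minority_df = df[df["class_id"].isin(minority)]        # goes into EVERY fold
majority_df = df[~df["class_id"].isin(minority)].reset_index(drop=True)

fold_sizes = []
gkf = GroupKFold(n_splits=2)
for train_idx, _ in gkf.split(majority_df, groups=majority_df["image_id"]):
    # each fold trains on its share of the majority classes
    # plus ALL minority-class samples
    train_df = pd.concat([majority_df.iloc[train_idx], minority_df])
    fold_sizes.append(len(train_df))
```

Only the majority classes are cross-validated; every fold's training set is topped up with the full minority DataFrame.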
Okay thanks
@Koleshjr You're welcome.
Thanks @MICADEE - any reason why the code is not shared? Your readme looks like it is missing the juicy details. Of course, I also understand if you want to keep those to yourself.
@nymfree Not really, but the readme was tagged "Winning Solution Detailed Breakdown" purposely so that all the ideas behind this solution can be understood. Again, read through the subtopic "Filtering the Data" in the shared solution to understand the major trick (or the juicy details) behind it. Cheers!!!
Nice, it looks exactly the same as 5th place: the hyperparameters and the model (AdamW, 'lr0': 3e-4, 50 epochs, image size 1024, close_mosaic=30) and offline augmentation.
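Assuming the Ultralytics YOLO API, the hyperparameters listed above map onto a training call like the one below; the model checkpoint and dataset path are placeholders, since neither was stated in the thread.

```python
from ultralytics import YOLO

model = YOLO("yolov8x.pt")  # placeholder; the model size wasn't stated here
model.train(
    data="data.yaml",    # placeholder dataset config
    epochs=50,
    imgsz=1024,
    optimizer="AdamW",
    lr0=3e-4,
    close_mosaic=30,     # disable mosaic augmentation for the last 30 epochs
)
```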
Congrats again on your great achievement!