Hi everyone,
I found the challenge fascinating, but I struggled a lot with class imbalance. I tried undersampling, oversampling, a combination of both, feature engineering, and setting class weights during fitting. Despite all these efforts, I couldn’t push my score past 0.68, and my models kept overfitting. Now that the challenge is over, I’m really curious to learn how the top participants achieved their impressive scores.
How did you handle class imbalance? What kind of feature engineering did you use? And which models worked best for you—gradient-boosted tree (GBT) models or something else?
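For context, this is roughly what I mean by "setting class weights while fitting" — a minimal sketch using scikit-learn on a synthetic imbalanced dataset (the data, parameters, and model here are just placeholders, not the actual challenge setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_sample_weight

# Synthetic stand-in for the challenge data: ~5% positive class
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "balanced" gives each sample a weight inversely proportional to its
# class frequency, so the minority class influences the loss as much
# as the majority class during fitting.
w = compute_sample_weight(class_weight="balanced", y=y_tr)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_tr, y_tr, sample_weight=w)
print("F1:", f1_score(y_te, model.predict(X_te)))
```

Even with weighting like this, my validation F1 plateaued, which is why I suspect the winners did something smarter than plain re-weighting or resampling.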
@Yisakberhanu @bentley @the_specialist @VincentSchuler @fristskill @marching_learning ...
If anyone is willing to share their methods, tips, notebook, or key insights, I’d truly appreciate it. It would be great to learn from your experiences and improve for future challenges.
Thanks in advance, and congrats to all the top performers!
You're spot on. I'd really like to understand what was done to get those high scores, too.