Okay guys, I think post-processing might be the key to LB improvement. What's your opinion?
Did you apply a post-processing trick to your subs?
Yeah, I did a single post-processing step that improved my score from 6.38 to 6.18
Wow, that's a boost. I tried the multiplier trick but it only boosted me a little. Are you using GBDT models?
Yes, I'm only using CatBoost. I think you need to understand the dynamics of your data for post-processing to work
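For anyone curious, here's a minimal sketch of the "multiplier trick" mentioned above, under the assumption that it means grid-searching a single scalar on held-out predictions and applying it to the test predictions. The function name and the grid range are my own choices, not from this thread:

```python
import numpy as np

def best_multiplier(y_true, y_pred, grid=np.linspace(0.8, 1.2, 81)):
    """Grid-search a scalar k that minimizes RMSE of k * y_pred vs y_true."""
    rmses = [np.sqrt(np.mean((y_true - k * y_pred) ** 2)) for k in grid]
    return grid[int(np.argmin(rmses))]

# Toy demo: a model that systematically over-predicts by ~10%
rng = np.random.default_rng(0)
y_true = rng.uniform(1, 10, 500)
y_pred = y_true * 1.1 + rng.normal(0, 0.05, 500)

k = best_multiplier(y_true, y_pred)  # should land near 1 / 1.1 ≈ 0.91
test_preds_adjusted = k * y_pred     # apply the same scalar to test predictions
```

Whether this helps depends entirely on whether your model has a consistent scale bias, which is exactly the "understand the dynamics of your data" point.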
Have you tried LightGBM? For me, LightGBM is performing better than CatBoost
I'm also impressed you used CatBoost to reach that score. Big ups!
Yeah, I did a lot of feature engineering; it worked and got me to the top 20. LightGBM didn't work very well for me, I'm not good at parameter tuning
You don't really have to tune the LightGBM parameters here. I just used early stopping of 5 rounds and 1000 n_estimators.
Wow, for me the score was >7 with 1000 n_estimators and no early stopping. Maybe I was overfitting
Definitely. When I tried even 10 early stopping rounds, I got a worse score. Can't tell for sure though, we're just probing the public board 🤣🤣
🤣🤣 that's a fact
I didn't do any validation; I lost a lot of signal during splitting, so I'm actually relying on the LB so far @CodeJoe