So far, gradient boosting trees have given me the best results. LightGBM didn't do as well as I expected. If LightGBM gave you the best results, please share your score. Also, has anyone used TabNet or any other deep learning model for this competition?
I don't think deep learning models are well suited here because of interpretability concerns, unless interpretability isn't really an issue.
Maybe CatBoost would do better than LightGBM.
CatBoost CV: 7.3999, LB: 7.6292
Your CV and LB correlate really well.
Yes, it is.
LGBM CV: 7.35, LB: 6.06
Haven't tried any other model yet. And additional data did not help.
Did you do any hyperparameter tuning? Because mine is performing that badly. Also, the additional data actually worked for me; I think you should look at it again.
No HP tuning as of now, just early stopping with a 0.03 learning rate.
Nice to know additional data is useful :)
Which is your current best model, and what is its CV?
Mine is CatBoost: 7.71 CV and 5.876 LB. I've been thinking about it, and I feel I'm doing something wrong 😅 because my CV is that bad. I might be overfitting.
Nice. Honestly, all of us might be overfitting on this data; have a look at your train and validation MAEs :)
Also, it's interesting that there is a 3.xx on the LB. Anyway, all the best :)
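A quick way to run that train-vs-validation check (a sketch on synthetic data; the gap between the two MAEs is the overfitting signal):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Deliberately flexible model so the overfitting gap is visible.
model = GradientBoostingRegressor(n_estimators=500, max_depth=6, random_state=0)
model.fit(X_train, y_train)

train_mae = mean_absolute_error(y_train, model.predict(X_train))
val_mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"train MAE: {train_mae:.3f}, val MAE: {val_mae:.3f}")
# A large val/train gap suggests overfitting: regularize, shrink the
# learning rate, or reduce tree depth / number of estimators.
```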
All the best to you too.
If you don't do any feature engineering, what is the best single-model score?
Hello! If you don't mind, could you please give me some idea of how you have approached this problem? I would be grateful!
I don't really get the question. Can you please rephrase it? Because we definitely work with features.
The challenge organisers have provided a reference notebook; you can get ideas from it for your first submission.
Already gone through it.