Hey Koleshjr,

Thanks for sharing your solution. It was genuinely impressive to see the level of thought and execution behind it, especially given your 16th-place finish in the DigiCow Challenge. That's a serious achievement.
I was wondering: did you experiment with survival analysis at any point? If so, what led you to settle on classical classification (CatBoost) instead of the survival approach?
Hi, thank you, I really appreciate that!

To be honest, I didn't experiment with survival analysis during the competition. I approached the problem directly as three probabilistic classification tasks for the 7-, 90-, and 120-day adoption windows.
The competition framing made that approach quite natural since the targets were already defined as binary outcomes within fixed time horizons, and the evaluation metric (log loss / ROC-AUC) aligns very well with standard probabilistic classifiers.
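To make the framing concrete, here is a minimal sketch of how such fixed-horizon binary targets can be derived from event dates. The column names (`registration_date`, `adoption_date`) are invented for illustration; the actual dataset schema may differ.

```python
import pandas as pd

# Toy data: one row per farmer; NaT means the farmer never adopted.
# Column names are hypothetical, not the real DigiCow schema.
df = pd.DataFrame({
    "farmer_id": [1, 2, 3],
    "registration_date": pd.to_datetime(["2023-01-01", "2023-01-01", "2023-01-01"]),
    "adoption_date": pd.to_datetime(["2023-01-05", "2023-03-15", pd.NaT]),
})

days_to_adopt = (df["adoption_date"] - df["registration_date"]).dt.days

# One binary target per fixed horizon. A NaT gap compares as False,
# so never-adopters land as negatives in every window.
for horizon in (7, 90, 120):
    df[f"adopted_{horizon}d"] = (days_to_adopt <= horizon).astype(int)
```

Each of the three columns then feeds its own probabilistic classifier.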
Because of that, I focused most of my effort on feature engineering and iterative modeling with tree-based methods, and CatBoost worked particularly well given the tabular structure and categorical variables in the dataset.
Thanks for sharing 🙏