Honestly, I'm also curious: if anyone here managed to break 90 with just a single model plus postprocessing, please share your approach. I'm currently racking my brain trying to reach that level of accuracy! Any tips would be really appreciated.
I really do not know if this can be achieved with a single model; perhaps the people who have achieved it can help us out here.
"postprocessing" though?🤔
No postprocessing😅. Postprocessing undermines the trustworthiness evaluation. But you might definitely need more than a single model. Can't tell about top3. They are far away😅
Hey! What do you mean by "Postprocessing undermines the trustworthiness evaluation"?
3. Approach Reusability (max 100 words) (WG Modelling): Comment on how adaptable or reusable your model architecture and methods are for other natural disaster contexts or landslide events. Note any design choices you made to enhance flexibility and discuss any limitations to reusability you identified.
Main focus is on Reusability. "Some postprocessing" tricks are literally tailored to the dataset being used, therefore making the code not reusable.
I don't think I really understand why postprocessing is necessarily dataset-tailored. Maybe you have an example of how it could reduce reusability?
Here is my example of how I use postprocessing: since we are scored on F1 and handle an imbalanced dataset (which occurs quite often), it is necessary to balance recall and precision. But so far I haven't heard of a loss function that directly optimizes the F1 score, so I use a postprocessing technique to correct my predictions so that they 'align' better with the F1 score.
Since that approach doesn't use any dataset-specific rule-based system, I think it should be valid.
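This isn't the exact code used in the thread, just a minimal sketch of that kind of F1-aligned postprocessing, assuming the model outputs class probabilities: search for the decision threshold that maximizes F1 on validation data, then apply it to the test predictions. The names `y_val`, `oof_proba`, and `test_proba` are placeholders.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(y_true, proba, grid=np.linspace(0.05, 0.95, 181)):
    """Pick the probability cutoff that maximizes F1 on validation data."""
    scores = [f1_score(y_true, (proba >= t).astype(int)) for t in grid]
    best_idx = int(np.argmax(scores))
    return grid[best_idx], scores[best_idx]

# Hypothetical usage with out-of-fold probabilities and validation labels:
# best_t, best_f1 = tune_threshold(y_val, oof_proba)
# test_pred = (test_proba >= best_t).astype(int)
```

Nothing in the search depends on this particular dataset, which is why it stays reusable across other imbalanced problems.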
Exactly, that's a reusable postprocessing trick. That's why I intentionally said "some postprocessing" tricks in my earlier comment.
@Koleshjr, are your results from a single model?
Yes
Nice!
It's tough to see the integrity of the competition get undermined like this. I share your feeling that hardcoding the leaked values is just plain wrong.
Also struggling to break 90.
Try more models
Trying to improve my best single model this weekend before trying ensembles.
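Not anyone's actual pipeline from this thread, but as a hypothetical illustration of what "try more models" can look like in practice: fit a few diverse classifiers and average their positive-class probabilities before thresholding. All data variables below are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# A small, deliberately diverse set of base models (illustrative choices only).
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=500, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

def ensemble_proba(models, X_train, y_train, X_test):
    """Fit each model and average their positive-class probabilities."""
    probas = []
    for model in models:
        model.fit(X_train, y_train)
        probas.append(model.predict_proba(X_test)[:, 1])
    return np.mean(probas, axis=0)

# blended = ensemble_proba(models, X_train, y_train, X_test)
# final_pred = (blended >= best_t).astype(int)  # best_t from a threshold search as sketched above
```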
Looks like @koleshjr has cracked the code 👀
Catch my streams next week, I will explain😅
my good model is inspired by your stream 😁
Wow Wow I have to try it too😅
@nymfree great to hear that bro 🤝
😄😄😄