Another fun-filled weekend comes to an end with the Zindi weekend hackathons. I joined recently and have been enjoying solving real-world problems on Zindi. This is what differentiates Zindi from the others.
Please find my rank #10 solution here: https://github.com/chetanambi/Zindi-Solutions/tree/master/To%20Vaccinate%20or%20Not%20to%20Vaccinate . My final score is an ensemble of my top 4 models.
Requesting the top 5 winners to share their solutions so we can all learn from your experience.
Cheers!
Thanks for posting. Nice, elegant solution!
Thank you for contributing and sharing your solution! A question and a tip:
- Q: How did you pick your learning rate and number of epochs? Any tricks, or just hand-picked due to time constraints?
- Tip: simpletransformers has great integration with Weights & Biases - if you're doing more with NLP going forward it's a nice easy way to track training progress and experiments. Also kind of addictive watching the loss change over time ;)
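For anyone curious about that tip, wiring simpletransformers up to W&B is just a matter of the model args dict. A minimal sketch, where the hyperparameter values and project name are made up for illustration, not the poster's actual settings:

```python
# Minimal sketch of logging simpletransformers training to Weights & Biases.
# Values here are illustrative; "wandb_project" is the simpletransformers
# hook that turns on W&B experiment tracking.
model_args = {
    "num_train_epochs": 4,          # the knob that reportedly helped in this thread
    "learning_rate": 4e-5,
    "wandb_project": "zindi-vaccine-tweets",  # hypothetical project name
}

# The dict is then passed when building the model, e.g.:
# from simpletransformers.classification import ClassificationModel
# model = ClassificationModel("roberta", "roberta-base", args=model_args)
# model.train_model(train_df)  # loss/metrics stream to the W&B dashboard
```

Once that project key is set, every training run shows up as an experiment in the W&B UI, so comparing epoch counts or learning rates across runs is a matter of a couple of clicks.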
Also congrats - I see you're way up on a lot of leaderboards in the short time you've been on here. Nice work!!!
I tried the default values first and then changed a few of the parameters manually. In my case, only increasing the number of epochs helped improve the score. Due to time constraints and the limited availability of GPU on Colab, I missed trying a couple of ideas. Strangely, Colab kept disconnecting after 30 minutes, maybe because I had used a lot of GPU over the weekend :) On Kaggle I was running into an error when using transformers. Would love to see your solution!
Thanks for that tip!
Thanks so much @chetan_Ambi! I always learn a lot from your sharing.
"Sharing is caring," @Nasirudeen Raheem :) It's always good practice to learn from the winners' solutions. I do that a lot too.
Thanks @Chetan_Ambi. sweeeet ;D
Rank #45 solution, using Fast.ai: https://github.com/anindabitm/Zindi_hack/blob/master/Zindi-hack-4-nlp-vaccine.ipynb
Could not use simpletransformers on Kaggle; it was giving an error on import, and my Colab quota was already heavily used up. Congratulations to the winners!
Thanks for sharing @Chetan_Ambi and @aninda_bitm
Mine was a simple bag-of-words approach with an XGBoost model (0.571 RMSE, #34 position). Word embeddings with an RNN performed worse.
https://github.com/steph-en-m/competitive-programming/tree/master/Tweet_classification
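For anyone new to the approach, here is a minimal pure-Python sketch of the bag-of-words step. The real solution presumably used a library vectorizer before feeding XGBoost; the function name and example tweets below are just illustrative.

```python
from collections import Counter

def bag_of_words(texts):
    """Build a shared vocabulary and a count vector for each text.

    Word order is discarded, hence the "bag": each text becomes a
    vector of token counts over the shared vocabulary.
    """
    vocab = sorted({tok for t in texts for tok in t.lower().split()})
    vectors = []
    for t in texts:
        counts = Counter(t.lower().split())
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

tweets = ["vaccines are safe", "are vaccines safe"]
vocab, X = bag_of_words(tweets)
print(vocab)  # ['are', 'safe', 'vaccines']
print(X)      # [[1, 1, 1], [1, 1, 1]]  -- both tweets map to the same vector
```

Note that the two example tweets get identical vectors because only counts survive, not order; that loss of word order is exactly why sequence models like RNNs can, in principle, do better, even though here the simpler representation won out.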