Hi,
Congrats to everyone who participated in this challenge.
My approach to this challenge was very straightforward: I framed it as a classification problem, using the HuggingFace RoBERTa transformer as the backbone. The classification head was a Linear layer with 3 units. I chose that configuration so that, even while solving it as a classification problem, I could still output probabilities rather than plain labels. Finally, based on the output's argmax (0, 1, or 2), I assigned a coefficient α ∈ {-1, 0, 1} to its probability value.
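The architecture above can be sketched roughly as follows. This is a minimal illustration, not the exact training code: the backbone output is stubbed as a pooled feature tensor (in the real solution it would come from a HuggingFace RoBERTa model), and the hidden size of 768 is an assumption matching roberta-base.

```python
import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    """Sketch of the 3-unit classification head described above.

    Assumes a pooled backbone representation of size `hidden_size`
    (768 for roberta-base); the RoBERTa backbone itself is omitted
    to keep the example self-contained.
    """
    def __init__(self, hidden_size: int = 768, num_classes: int = 3):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, hidden_size) -> class probabilities (batch, 3)
        return torch.softmax(self.head(pooled), dim=-1)
```

Because the head ends in a softmax, the model yields a probability per class (Negative, Neutral, Positive) rather than a hard label, which is what the signed-score trick below relies on.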
Let's say that for a single input the model outputs Tensor([[0.73, 0.12, 0.15]]). The argmax is 0, meaning the tweet is classified as Negative, so the coefficient for this output is -1. My submission file therefore contains -0.73 (-1 × 0.73) for that input.
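The mapping from class probabilities to a signed score can be written in a few lines. This is a sketch of the post-processing step described above (the function name and the use of NumPy are my own choices, not from the original code):

```python
import numpy as np

# Coefficient per argmax class: 0 -> -1 (Negative), 1 -> 0 (Neutral), 2 -> +1 (Positive)
COEF = np.array([-1.0, 0.0, 1.0])

def signed_score(probs: np.ndarray) -> np.ndarray:
    """probs: class probabilities, shape (n, 3). Returns signed scores, shape (n,)."""
    idx = probs.argmax(axis=1)                    # winning class per row
    top = probs[np.arange(len(probs)), idx]       # its probability
    return COEF[idx] * top                        # sign it by the class coefficient

# The example from the post:
print(signed_score(np.array([[0.73, 0.12, 0.15]])))  # -> [-0.73]
```

Note that a Neutral argmax always maps to exactly 0 regardless of its probability, since its coefficient is 0.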
This competition was really cool; I hope to see more NLP challenges.
You'll find my solution here: https://github.com/NazarioR9/ToVaccineOrNotToVaccine
Great work, thank you for sharing.
Thanks!
Congratulations @Nazario, thanks a lot for sharing!!!
Congrats to you too, you smashed it. Instead of averaging my predictions, I think I'll try your approach of blending different models, since my previous best score was with RoBERTa Base.
I am sure you will do great!!!