If you mean using predictions to label Luo or Setswana data, I think that's fine.
I tried that earlier and it didn't help much. I think it's better than POS placeholders like 'X', but you're still only getting around 40-50% accuracy on the labels, so it's not a game changer.
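For anyone curious, here's roughly what I mean, as a minimal sketch. The checkpoint name, confidence threshold, and example sentence are all illustrative, not my actual setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

CKPT = "your-pos-tagger"   # placeholder: a tagger fine-tuned on the labelled languages
THRESHOLD = 0.9            # illustrative cutoff; keep only confident predictions

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForTokenClassification.from_pretrained(CKPT).eval()

def pseudo_label(words):
    """Tag one pre-tokenised sentence; low-confidence words come back as None."""
    enc = tokenizer(words, is_split_into_words=True,
                    return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**enc).logits.softmax(-1)[0]   # (n_subtokens, n_tags)
    word_ids = enc.word_ids()
    labelled = []
    for i, word in enumerate(words):
        sub = word_ids.index(i)                      # first subtoken of word i
        conf, tag_id = probs[sub].max(-1)
        tag = model.config.id2label[tag_id.item()]
        labelled.append((word, tag if conf.item() >= THRESHOLD else None))
    return labelled

print(pseudo_label(["Ngwana", "o", "bala", "buka"]))  # Setswana example sentence
```

With only 40-50% label accuracy, filtering by confidence like this keeps the pseudo-labelled set smaller but cleaner, which is the only way I found it even marginally useful.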
I've pivoted to training adapters on monolingual text without any labels, as described in the paper. But it won't run in 16GB of VRAM on Colab, so I'm maybe out of the comp now unless I can work that out, or make a plan to use a bigger GPU.
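This is the general shape of what I'm attempting, as a sketch. Note I'm using PEFT/LoRA here as a stand-in for the paper's adapter layers, and the base model, hyperparameters, and memory settings (fp16, small batch plus accumulation) are guesses at what might fit in 16GB, not a working recipe:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "xlm-roberta-base"  # illustrative; any multilingual MLM base
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForMaskedLM.from_pretrained(BASE)

# Freeze the base model and train only small injected adapter weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["query", "value"]))

texts = ["..."]  # monolingual Luo/Setswana sentences go here
ds = Dataset.from_dict({"text": texts}).map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("adapter-mlm",
                           per_device_train_batch_size=8,
                           gradient_accumulation_steps=4,
                           fp16=True, num_train_epochs=1,
                           logging_steps=50),
    train_dataset=ds,
    # Masked-LM collator creates the labels, so no annotation is needed.
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
```

Since only the adapter weights are trainable, the optimizer state is tiny; that plus fp16 is where I'm hoping the memory savings come from.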
thanks for the confirmation
Hey db and Ashuma,
Keep your mind open; whatever GPU works here. You are only out of the competition if you give up, never before. The score my team is getting doesn't come from "training adapters on monolingual text without labels at all" or any similar technique, and it doesn't need more than 16GB of VRAM.
We also don't probe, and it's very unlikely we are overfitting; you still have massive room for improvement from 0.488. Kaggle notebooks are free, and you can no doubt claim a top-5 spot with them.
Thanks, looks like I was trying too hard. Bumped up to 0.59 with just some more fine-tuning. Just need another... 13% to go!
Good to hear it. :)