Has anybody improved their LB score with RGN images? In my experience, using RGN images on their own makes my LB score worse, so I stopped considering them. How about you?
It's not helpful to just mix the RGN images in with the RGB images, and training on RGN alone gives a much worse result. What I think would be useful is to build a network with two inputs (RGB and RGN) that go through two separate convnets (or another kind of feature extractor), then concatenate both outputs before passing them through the classification head.
It might be useful to fine-tune both convnets separately on the RGB and RGN images first, and then use those weights to initialize the combined network.
Just some ideas; I have not tried this myself, and you can definitely get pretty high on the LB by just classifying the RGB images and ignoring the RGN ones.
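In PyTorch, the two-input idea could look roughly like this (untested sketch; the MobileNetV2 backbones and the layer sizes are just placeholders, and the per-branch weight loading assumes you saved the separately fine-tuned models yourself):

```python
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamNet(nn.Module):
    """One convnet per modality; features are concatenated before a
    shared classification head. Backbones and sizes are placeholders."""

    def __init__(self, num_classes):
        super().__init__()
        # Separate feature extractors for the RGB and RGN inputs.
        self.rgb_branch = models.mobilenet_v2(weights="IMAGENET1K_V1").features
        self.rgn_branch = models.mobilenet_v2(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # MobileNetV2 feature maps have 1280 channels per branch.
        self.head = nn.Sequential(
            nn.Linear(1280 * 2, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, rgb, rgn):
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)
        f_rgn = self.pool(self.rgn_branch(rgn)).flatten(1)
        return self.head(torch.cat([f_rgb, f_rgn], dim=1))


model = TwoStreamNet(num_classes=10)  # set to your number of classes
# If you pre-fine-tuned each branch separately, you could load those
# weights here, e.g. model.rgb_branch.load_state_dict(rgb_branch_weights).
```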
Great Idea! Thank you.
I've been doing exactly that: a Siamese-style network that takes one image on each side and concatenates the outputs. I initially used two MobileNets that merge into one at the dense layer, but now I just use my own very small net.
Initially I used two similar but separate nets, and the RGB one did a little better.
Although I did not do that, I think you could perhaps increase the number of training samples by pairing different images within a single class rather than using the same (aligned) RGB and RGN image for each input (see the sketch below).
Augmentation on RGN is tricky, and the sizes differ quite a bit; those are just some practical issues you need to consider when you do this.
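For the class-level swapping idea, a minimal dataset sketch could look like this (the `rgb_by_class` / `rgn_by_class` dicts mapping each label to its file paths are hypothetical; wire it up to however you already list your files):

```python
import random

from PIL import Image
from torch.utils.data import Dataset


class ClassPairedDataset(Dataset):
    """Pairs each RGB image with a randomly chosen RGN image of the same
    class instead of its aligned counterpart, multiplying the number of
    distinct (RGB, RGN) training pairs."""

    def __init__(self, rgb_by_class, rgn_by_class, transform=None):
        # rgb_by_class / rgn_by_class: dict mapping label -> list of paths.
        self.rgb_items = [(label, path)
                          for label, paths in rgb_by_class.items()
                          for path in paths]
        self.rgn_by_class = rgn_by_class
        self.transform = transform

    def __len__(self):
        return len(self.rgb_items)

    def __getitem__(self, idx):
        label, rgb_path = self.rgb_items[idx]
        # Swap in any RGN image from the same class.
        rgn_path = random.choice(self.rgn_by_class[label])
        rgb = Image.open(rgb_path).convert("RGB")
        rgn = Image.open(rgn_path).convert("RGB")
        if self.transform is not None:
            rgb, rgn = self.transform(rgb), self.transform(rgn)
        return rgb, rgn, label
```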