Microsoft Rice Disease Classification Challenge
Can you identify disease in images of rice grown in Egypt?
Prize: $3,000 USD
Time: Ended 6 months ago
Participants: 261 active · 833 enrolled
Helping: Egypt
Tags: Intermediate · Computer Vision · Classification · Agriculture
GradCAM - Playing with interpretability
Data · 18 May 2022, 14:01

I found it interesting to inspect which parts of the image were most responsible for specific predictions. These heatmaps are from a not-particularly-good model, and in some places it seems to focus on the ground/water or other seemingly arbitrary regions. Grad-CAM isn't a perfect method, but it's interesting and, I think, something we should be doing.

For those who'd like to try it, I used https://github.com/Synopsis/amalgam with:

pip install fastai-amalgam

from fastai_amalgam.interpret.all import *

# learn is a trained fastai Learner; this shows the Grad-CAM heatmap for one image
learn.gradcam('images/id_004wknd7qd.jpg')
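
If you'd rather not add a dependency, roughly the same heatmaps can be produced with plain PyTorch hooks. Here's a minimal sketch; the torchvision ResNet, the choice of the last layer4 block as target layer, and the preprocessing are illustrative assumptions on my part, not what fastai-amalgam does under the hood.

# Minimal Grad-CAM sketch in plain PyTorch (illustrative model/layer/transforms).
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet34(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]            # last conv block of the backbone

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("images/id_004wknd7qd.jpg").convert("RGB")).unsqueeze(0)
logits = model(img)
logits[0, logits.argmax()].backward()      # gradient of the top-class score

# Weight each activation map by its average gradient, ReLU, then normalise to [0, 1].
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1)).squeeze()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Upsample to the input resolution; overlay on the photo to see where the model looks.
heatmap = F.interpolate(cam[None, None], size=(224, 224), mode="bilinear")[0, 0]

Overlaying heatmap on the resized photo (imshow with some alpha) gives the same kind of picture as learn.gradcam, just without the wrapper.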

My subjective impression, based on a few different models, is that they tend to focus on the parts I'd expect at least some of the time, but occasionally seem to derive information from patches of ground. Perhaps the data was collected from fields that are distinguishable by soil type or something? Or I'm just reading too much into a blurry blob :) Anyway, I hope you find this interesting. Good luck all!

Discussion 3 answers

Interesting insights @Johnowhitaker, thanks for this.

18 May 2022, 15:53
Upvotes 1

Thanks for sharing.

18 May 2022, 22:19
Upvotes 1

Interesting👍

10 Jun 2022, 17:46
Upvotes 0