
Makerere Fall Armyworm Crop Challenge

Helping Uganda
$1 000 USD
Challenge completed over 3 years ago
Classification
Computer Vision
701 joined
155 active
Start: Apr 14, 22
Close: Jul 23, 22
Reveal: Jul 23, 22
Updates
Connect · 29 May 2022, 04:42 · edited 6 minutes later

11:39pm Sat:

Found various notebooks and examples with fastai2. Also found the nolearn and Lasagne libraries.

1. I'm getting the following error while trying `from fastai2.vision.all import *` in the Potholes Tutorial FastAI2.ipynb from https://colab.research.google.com/drive/1RKijtQkz4jwIMUeBOOufgyBWni40ZjI1

"tensor.H is only supported on matrices (2-D tensors). Got 1-D tensor."

I've been banging against that error, which comes from torch_core.py, for a good chunk of time now. The Colab runtime is set to GPU, so I don't think that's the issue, and StackOverflow and Google weren't much help with the error. If I move on and apply the techniques from the helpful notebooks using different libraries, is there a preference that I use nolearn/Lasagne?
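For what it's worth, that error message is PyTorch's own: `Tensor.H` (the conjugate-transpose property) is only defined for 2-D tensors, so presumably somewhere in the fastai2 code path a 1-D tensor is hitting `.H` on a newer torch release. A minimal sketch that just reproduces the message (this doesn't fix the notebook; pinning a torch/fastai pair known to work together is one possible workaround):

```python
import torch

# Tensor.H is the conjugate-transpose property, and PyTorch only
# defines it for 2-D tensors (matrices).
v = torch.arange(3.0)  # a 1-D tensor
try:
    _ = v.H
except RuntimeError as e:
    print(e)  # "tensor.H is only supported on matrices (2-D tensors). Got 1-D tensor."

m = torch.eye(2)  # a 2-D tensor
print(m.H.shape)  # works fine on a matrix: torch.Size([2, 2])
```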

Regardless, I'm still working at this. Thanks for providing feedback in a format that's familiar to me; I know writing all this documentation is a feat.

--------

And because I'm trying to communicate as much as possible through this, I have responses to Tom Darling (Great Gatsby?):

I think I know the answer, but... Where does “Valid loss” come from? I think it is simply a cross-validation (perhaps calling sklearn, cv=5) against the training data set. But, perhaps, Lasagne separates out a validation data set from the original training set, in which case the training set is smaller than what I sent. Just curious, because I’m holding back a validation data set that I could send to the Lasagne trainer.

By 'Valid loss' I assume you're referring to the visualization cells.

The short answer is that yes, it is a cross-validation thing, and also an ugly copy/paste from TensorFlow's image classification tutorial. When I looked at history.history, it was a dict object that looked like { accuracy: [...], lr: [...], loss: [...] }. Either the history API changed, or their optimizers/loss functions don't have the same properties (they use Adam and SparseCategoricalCrossentropy).
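As a sketch of where those keys come from (the model and data below are toy stand-ins I made up, not the tutorial's): in Keras, passing validation data to `fit` is what adds the `val_loss` / `val_accuracy` entries to `history.history`, so a dict without them suggests the fit was run with no validation set.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model and random data, just to inspect the History object.
X = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=32)

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# validation_split holds out 25% of X/y; this is what produces the
# val_* keys alongside loss/accuracy.
history = model.fit(X, y, epochs=1, validation_split=0.25, verbose=0)
print(sorted(history.history))  # ['accuracy', 'loss', 'val_accuracy', 'val_loss']
```

(An `lr` key shows up only when something like a learning-rate scheduler callback records it, which may explain the dict you saw.)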

For the longer answer, I'm drawing from Introduction to Data Mining by Tan, Steinbach, and Kumar (2016, pg. 184):

Using a validation set: In this approach, instead of using the training set to estimate the generalization error, the original training data is divided into 2 smaller subsets. One of the subsets is used for training, while the other, known as the validation set, is used for estimating the generalization error.

e.g. tutorial chart: https://www.tensorflow.org/tutorials/images/classification_files/output_dduoLfKsZVIA_0.png
