
GIZ NLP Agricultural Keyword Spotter

Helping Uganda
Prize: $7,000 USD
Status: Completed
Tags: Classification · Automatic Speech Recognition · Natural Language Processing
739 joined · 253 active
Start: 11 Sep 2020
Close: 29 Nov 2020
Reveal: 29 Nov 2020
Evaluation metric
Help · 30 Oct 2020, 12:59 (edited) · 3 replies

Hi,

I am wondering how Zindi evaluates the model. Given Zindi's example:

fn                       Pump  Spinach  abalimi
audio_files/009WL0S.wav  0.73  0.19     0.01
audio_files/00AH117.wav  0.03  0.45     0.99

Let's say the true labels are Pump for the 1st item and abalimi for the 2nd.

Are we computing the log loss for each class and then averaging all of them? So we end up with 193 log-loss values that are then averaged?
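For illustration, the two interpretations in question can be sketched on the example rows above. This is not Zindi's confirmed metric, just a hypothetical comparison: "class-wise" treats each of the 193 columns as a binary log loss and averages across columns, while "row-wise" multi-class log loss takes -log of the probability assigned to each item's true class. The true labels (Pump for row 1, abalimi for row 2) are as stated in the post.

```python
import math

EPS = 1e-15  # clip probabilities to avoid log(0)

def binary_log_loss(y_true, y_pred):
    """Average binary log loss over one class column."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, EPS), 1 - EPS)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Predicted probabilities from the example (columns: Pump, Spinach, abalimi).
preds = {
    "Pump":    [0.73, 0.03],
    "Spinach": [0.19, 0.45],
    "abalimi": [0.01, 0.99],
}
# One-hot truth: row 1 is Pump, row 2 is abalimi.
truth = {
    "Pump":    [1, 0],
    "Spinach": [0, 0],
    "abalimi": [0, 1],
}

# Interpretation 1: log loss per class column, then average over classes.
per_class = {c: binary_log_loss(truth[c], preds[c]) for c in preds}
class_averaged = sum(per_class.values()) / len(per_class)

# Interpretation 2: row-wise multi-class log loss, -log(p of the true class).
true_classes = ["Pump", "abalimi"]
row_wise = sum(
    -math.log(min(max(preds[c][i], EPS), 1 - EPS))
    for i, c in enumerate(true_classes)
) / len(true_classes)

print(round(class_averaged, 4))  # → 0.1956
print(round(row_wise, 4))        # → 0.1624
```

The two numbers differ, which is exactly why the question matters: with 193 mostly-zero columns, class-averaging is dominated by confident negatives, while row-wise loss only looks at the true-class probability.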

@zindi

Discussion · 3 answers
Insat

I could be wrong, but I think that if the evaluation were class-wise, it would need to be declared in the evaluation section, which is not the case.

30 Oct 2020, 14:57
Upvotes 0

Yes, but as there is no example, I am not sure how they compute it. Can @zindi explain how they evaluate an item?

Amy_Bray
Zindi
9 Nov 2020, 14:46
Upvotes 0