Hi, on the evaluation page it says:
The error metric for this competition is the F1 score, which ranges from 0 (total failure) to 1 (perfect score). Hence, the closer your score is to 1, the better your model.
but from various searches, treating the f1_score as an error would require f1_score_error = 1 - f1_score.
It stands to reason that it is an evaluation metric rather than an error metric, since we're interested in maximizing the f1_score.
Can someone please clarify?
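To illustrate what I mean, here's a small pure-Python sketch (toy labels and the helper function are my own, not from the competition):

```python
def f1(y_true, y_pred):
    # F1 = harmonic mean of precision and recall
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

score = f1(y_true, y_pred)  # ~0.857 -- higher is better, so we maximize it
error = 1 - score           # ~0.143 -- only needed if a tool expects a loss to minimize
```

So the F1 score itself is something to maximize; you'd only compute `1 - f1_score` if some optimizer or library insists on a quantity to minimize.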