My LB scores seem to be no better than random guesses for my NN or GBT models, despite a local CV AUC of 0.81 for the NN and 0.76 for the GBT. Has anyone else experienced this?
Basically, after making predictions, assuming all your predictions are in a list, do:

import pandas as pd

# Build a lookup from submission id to test-set row index using id_map.csv.
df = pd.read_csv('id_map.csv')
my_dict = {}
for i, row in df.iterrows():
    my_dict[row['id']] = row['ID']

# Fill the sample submission with the prediction for each id.
sub = pd.read_csv('SampleSubmission.csv')
for i in range(len(sub)):
    sub.at[i, 'class'] = test_predictions[my_dict[sub.at[i, 'id']]]
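The same lookup can be done without the row-by-row loops using a vectorized `Series.map`. A minimal sketch, assuming the same file names and the same `id`/`ID` column names as above (the tiny CSVs written at the top are synthetic stand-ins for the competition files, just so the snippet runs end to end):

```python
import pandas as pd

# Synthetic stand-ins for the competition files (assumed columns 'id' and 'ID').
pd.DataFrame({'id': ['a', 'b', 'c'], 'ID': [2, 0, 1]}).to_csv('id_map.csv', index=False)
pd.DataFrame({'id': ['a', 'b', 'c'], 'class': [0, 0, 0]}).to_csv('SampleSubmission.csv', index=False)

test_predictions = [0.10, 0.95, 0.42]  # predictions indexed by test-set row

id_map = pd.read_csv('id_map.csv')
sub = pd.read_csv('SampleSubmission.csv')

# Map each submission id to its test-set row index, then pull the
# matching prediction for that row.
idx = sub['id'].map(id_map.set_index('id')['ID'])
sub['class'] = [test_predictions[i] for i in idx]
sub.to_csv('submission.csv', index=False)
```

This avoids `iterrows`, which is slow on large frames, and makes the id-to-index alignment a single explicit step.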
Facing the same problem. Can anyone here explain this?
Figured it out. You have to use the provided id_map file.
I don't understand what you said. Can you explain?
Can you share any sample notebook?
Thanks