
Adbot Ad Engagement Forecasting Challenge

Helping South Africa
$500 USD
Completed (almost 2 years ago)
Forecast
452 joined
113 active
Start: Apr 04, 24
Close: May 19, 24
Reveal: May 19, 24
Koleshjr
Multimedia University of Kenya
12.27
Platform · 22 May 2024, 17:16 · 4

Hello Zindians,

Attached is a notebook that scores 12.27, potentially second on the LB. I would not have chosen this submission anyway, as I had sounder approaches than this, so let us say it was just a lucky score. I would love to hear from more experienced folks like @yanteixeira why such an approach would get such a good score, because I am also confused, and it has led me to think that this competition was only luck-based: the luckiest solution won, not the soundest one. But this is likely not true for those who had public LB / private LB correlation.

The sound approach I am talking about here is training a model for each agent, which honestly did not give me a very good result.
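A minimal sketch of what a per-agent approach could look like. The column names (`agent_id`, `x`, `engagement`) and the choice of `LinearRegression` are my assumptions for illustration, not the author's actual pipeline:

```python
# Hypothetical per-agent training: fit one model per agent rather than
# one global model. Column names and the regressor are illustrative only.
import pandas as pd
from sklearn.linear_model import LinearRegression

def train_per_agent(df, feature_cols, target_col="engagement"):
    """Fit one regressor per agent, keyed by agent id."""
    models = {}
    for agent_id, group in df.groupby("agent_id"):
        model = LinearRegression()
        model.fit(group[feature_cols], group[target_col])
        models[agent_id] = model
    return models

def predict_per_agent(models, df, feature_cols):
    """Route each row to its agent's model (assumes every agent was seen in training)."""
    preds = []
    for agent_id, group in df.groupby("agent_id"):
        p = models[agent_id].predict(group[feature_cols])
        preds.append(pd.Series(p, index=group.index))
    return pd.concat(preds).sort_index()
```

One known weakness of this scheme, which may explain the weak result: agents with few rows get models fit on very little data.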

Also, I would love the top teams to share their approaches too, as I believe the period for sending notebooks for evaluation has already ended ;)

Github link:

koleshjr/ADBOT: Can you predict the future success of a digital advert? (github.com)

If you find it helpful, please star it; it motivates me to open-source more solutions. Thank you.

And use the good Gist trick to post code on the discussions page :) You guys should use this trick more often.

/train-adbot.ipynb?authuser=1

Discussion · 4 answers
yanteixeira

I'm curious why your solution also scored well. I'm comparing the results of yours and mine. Here's something interesting...

It's surprising that our models have such a large difference in predictions despite achieving similar performance on the private leaderboard.

Here's the most interesting finding: I took your submission and changed only the prediction for ID_629e729214035d4a8541e0b8_2024_02_28 from 444 (your original value) to 1107 (my model's prediction); all other predictions stayed the same.

So basically, nothing changed! How? I ran the same experiment with my submission: I changed 1107 to your 444, and the score was exactly what I had been getting.

22 May 2024, 19:08
Koleshjr
Multimedia University of Kenya

Maybe it was in the public leaderboard. That submission's public LB score was >70, if I remember correctly.

yanteixeira

Oh, you are right. It seems I picked an example from the public LB. I would have noticed it earlier if the public and private scores didn't share this quirk.

Probably all the big differences between our models come from public LB IDs, and since I got a better LB score than you, we can assume that 1107 is closer to the true value than 444.

Well, we solved the mystery. RMSE punishes mispredictions very hard, and it seems your model got one or two IDs from the public LB badly wrong. But if it weren't for those cases, your model is probably the best one here.
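The point about RMSE punishing mispredictions is easy to see numerically: because errors are squared before averaging, one large miss hurts far more than the same total error spread over many rows. The numbers below are illustrative, not from the competition:

```python
# Two submissions with the same total absolute error (100), scored by RMSE.
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired lists."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

truth = [100] * 10
spread = [110] * 10          # ten small misses of 10 each
spiked = [100] * 9 + [200]   # one big miss of 100, the rest perfect

print(rmse(truth, spread))   # 10.0
print(rmse(truth, spiked))   # ~31.6 -- same total error, over 3x worse RMSE
```

This is why getting just one or two public-LB IDs badly wrong can sink an otherwise strong submission.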

I don't think there is a way to know whether this particular model would hold up on the private LB, because I don't think there is a single CV scheme that works for this problem. I wouldn't say 'the luckiest solution won' until we see @51pegasi's solution, since he is the only one in the top 10 with close public and private scores.

Koleshjr
Multimedia University of Kenya

@yanteixeira True, we will wait for @51pegasi's write-up. Also, our scores aren't far from each other, 7 and 12, and they are both impressive. I will also wait for your write-up to learn how you handled it.