Good afternoon!
In a competition like this, even small changes to the model's forecast matter. I can guarantee, and if necessary demonstrate, reproducibility of the solution (a 12-minute run) on my Mac M3, but on another machine the pipeline may produce a different result.
Could you tell me whether such a demonstration on my part would be acceptable after I send you the full code?
That's a bummer. One quick trick is to pin the exact package versions you used: if you are using Python, run `pip freeze` and put the output in a requirements file. That way the environment can be recreated elsewhere, which often resolves reproducibility issues. I hope that helps.
I've already used `pip freeze` as well. Unfortunately, even with identical library versions, 100% reproducibility on another machine is not guaranteed.
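Pinning library versions controls only one variable; explicit seeding controls another. A minimal sketch (assuming NumPy-based code; the `set_seeds` helper is hypothetical, not part of any library) of pinning the Python-level random state — note that this guarantees identical results only on the same machine, since framework internals, multithreaded BLAS kernels, and GPU nondeterminism can still differ across hardware:

```python
import os
import random

import numpy as np


def set_seeds(seed: int = 42) -> None:
    """Pin every source of randomness we control directly.

    This covers the Python stdlib and NumPy only; frameworks such as
    PyTorch or TensorFlow, parallel BLAS kernels, and GPU atomics can
    still introduce run-to-run (and machine-to-machine) differences.
    """
    os.environ["PYTHONHASHSEED"] = str(seed)  # affects newly started interpreters
    random.seed(seed)
    np.random.seed(seed)


set_seeds(42)
a = np.random.rand(3)
set_seeds(42)
b = np.random.rand(3)
assert np.array_equal(a, b)  # same machine, same seed: identical draws
```

Even with all seeds pinned, cross-machine differences can remain (e.g. different BLAS builds or floating-point summation order), which may be exactly what you are seeing.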
Have you tried running this on a shared cloud platform like Kaggle or Colab? Developing on such platforms is a common practice for avoiding reproducibility problems.
Yes, I tried it there. Even with completely identical libraries, the result differs from the one on my computer. Since the deadline has been extended, I will keep trying to solve this problem.
Sure, good luck!