We have minutes 1 to 15 available as input for the test segments, and the task is to forecast minutes 17, 18, 19, 20, and 21 — the next 5 minutes, with a one-minute embargo (meaning minute 16 is excluded).
“Back-propagation” in this context means you’re not allowed to, say, use the prediction for minute 19 to help predict minute 18.
This also makes sense from a real-world perspective: you cannot, say, use information from minute 21 — which hasn’t happened yet — to predict congestion in minute 20 while minute 20 is occurring.
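The windowing described above can be sketched in a few lines. This is a minimal illustration of the segment layout (the helper name and the flat per-minute list are my own, not the competition's actual data format):

```python
def split_segment(series):
    """Split a 21-minute segment into input and target windows.

    series: list of 21 per-minute values, where index 0 is minute 1.
    Minutes 1-15 are inputs, minute 16 is embargoed, minutes 17-21 are targets.
    """
    inputs = series[0:15]    # minutes 1-15
    # series[15] is minute 16 -- the embargo: used by neither side
    targets = series[16:21]  # minutes 17-21
    return inputs, targets

# dummy data where each value equals its minute number
segment = list(range(1, 22))
x, y = split_segment(segment)
print(y)  # [17, 18, 19, 20, 21]
```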
Inference is forward-only—no weight updates—but you can still adapt via prompts or external logic. Think Crossy Road rules: play smarter each run, don’t rewrite the game mid-hop.
Nice analogy—learning without changing weights highlights how strategy, context, and iteration can still drive smarter outcomes over time.
It means that during inference you can’t use gradients or future outputs to adjust predictions, and you can’t leak future information into current predictions. Normal training before inference is allowed, but inference has to be forward-only, with no backprop updates.
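A minimal sketch of what forward-only inference looks like, assuming a fixed one-step model (here a hypothetical naive last-value forecaster standing in for the real model): each minute is predicted from history alone, and an earlier prediction is never revised using a later one.

```python
def predict_next(history):
    # placeholder for a fixed, already-trained one-step model:
    # a naive "repeat the last value" forecast
    return history[-1]

def forecast(inputs, horizon=5, embargo=1):
    """Predict `horizon` minutes after skipping `embargo` minutes."""
    history = list(inputs)
    # roll forward through the embargoed minute without scoring it
    for _ in range(embargo):
        history.append(predict_next(history))
    preds = []
    for _ in range(horizon):
        y = predict_next(history)  # uses only minutes strictly before this one
        preds.append(y)
        history.append(y)          # forward-only: preds[:-1] is never edited
    return preds

print(forecast([1.0] * 15))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

The key property is the loop structure: information only ever flows left to right, so the prediction for minute 19 can never feed back into minute 18.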
It means the model is fixed at inference time—no weights are updated while it runs predictions, so you can’t use back-propagation to improve it on the test data.
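In PyTorch terms, one common way to enforce this is to freeze the model and disable gradient tracking (a sketch with a toy linear forecaster; the competition's actual harness and model are assumptions here):

```python
import torch
import torch.nn as nn

model = nn.Linear(15, 5)  # toy forecaster: 15 input minutes -> 5 outputs
model.eval()              # inference mode (affects dropout/batchnorm layers)

for p in model.parameters():
    p.requires_grad_(False)   # freeze the weights explicitly

with torch.no_grad():         # no gradients tracked, so no backprop possible
    x = torch.zeros(1, 15)
    y_hat = model(x)

print(y_hat.shape)  # torch.Size([1, 5])
```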
Your question touches on a fundamental aspect of the competition rules. The restriction on back-propagation is designed to ensure a fair evaluation of the models' generalization capabilities, preventing any form of "leaking" or overfitting to the test set during the inference stage.
Much appreciated 👍🏽