We have minutes 1 to 15 available as input for the test segments, and the task is to forecast minutes 17, 18, 19, 20, and 21 — the next 5 minutes, with a one-minute embargo (meaning minute 16 is excluded).
“Back-propagation” in this context means you’re not allowed to, say, use the prediction for minute 19 to help predict minute 18.
This also makes sense from a real-world perspective: you cannot, say, use information from minute 21 — which hasn’t happened yet — to predict congestion in minute 20 while minute 20 is occurring.
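As a concrete sketch of the setup described above (the variable names and toy data are my own, not from the challenge): minutes 1–15 are the input, minute 16 is embargoed and never seen, and minutes 17–21 are the forecast targets.

```python
import numpy as np

# Hypothetical windowing for one test segment (names are assumptions):
INPUT_MINUTES = list(range(1, 16))    # minutes 1..15 — available as input
EMBARGO_MINUTE = 16                   # excluded entirely (one-minute embargo)
TARGET_MINUTES = list(range(17, 22))  # minutes 17..21 — to be forecast

def split_segment(segment):
    """Split a 21-minute segment into input and target arrays.

    `segment` holds one congestion value per minute; index 0 is minute 1.
    Minute 16 appears in neither output, so the model never sees it.
    """
    x = segment[np.array(INPUT_MINUTES) - 1]
    y = segment[np.array(TARGET_MINUTES) - 1]
    return x, y

segment = np.arange(1, 22)  # toy data: each value equals its minute number
x, y = split_segment(segment)
print(list(x))  # [1, ..., 15]
print(list(y))  # [17, ..., 21] — minute 16 never appears
```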
Inference is forward-only—no weight updates—but you can still adapt via prompts or external logic. Think Crossy Road rules: play smarter each run, don’t rewrite the game mid-hop.
Nice analogy—learning without changing weights highlights how strategy, context, and iteration can still drive smarter outcomes over time.
It means that during inference you can’t use gradients or future outputs to adjust predictions, and you can’t leak future information into current predictions — normal training before inference is allowed, but inference has to be forward-only with no backprop updates.
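The forward-only rule above can be sketched as follows. This is a minimal illustration, not the challenge's method: the "model" here is a naive carry-forward baseline of my own choosing. The point is the control flow — each target minute is predicted in increasing order, using only the fixed input history and predictions already made for earlier minutes, and no prediction is ever revised afterwards.

```python
# Minimal sketch of forward-only, no-leakage inference (the forecast rule,
# a naive last-value carry-forward, is an assumption for illustration).
def forecast_forward_only(history, target_minutes):
    """Predict each target minute in strictly increasing order.

    Uses only `history` (minutes 1..15) and predictions already made for
    earlier target minutes. Once made, a prediction is never revised, so
    no future information can flow backward into an earlier prediction.
    """
    known = list(history)             # minute 16 stays unknown (embargo)
    preds = {}
    for t in sorted(target_minutes):  # 17, 18, ..., 21 — forward only
        # Naive baseline: carry the most recent known/predicted value forward.
        preds[t] = known[-1]
        known.append(preds[t])        # later minutes may build on earlier
                                      # predictions, never the reverse
    return preds

history = [3, 4, 5, 6, 5, 4, 4, 5, 6, 7, 7, 6, 5, 5, 6]  # toy minutes 1..15
print(forecast_forward_only(history, [17, 18, 19, 20, 21]))
```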
If back-propagation isn’t allowed during inference, it changes how we optimize performance. Sometimes constraints spark creativity — just like adapting mid-air in Wacky Flip to still land perfectly.
Much appreciated 👍🏽
Thanks