
AI4D Yorùbá Machine Translation Challenge

Helping Nigeria
$2 000 USD
Completed (almost 5 years ago)
Machine Translation
683 joined
84 active
Start: Dec 04, 2020
Close: May 30, 2021
Reveal: May 30, 2021
Colab pytorch gpu memory runout error
Help · 27 Mar 2021, 16:45 · edited 1 minute later

RuntimeError: CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 15.90 GiB total capacity; 14.97 GiB already allocated; 57.75 MiB free; 15.06 GiB reserved in total by PyTorch)

I tried the suggestions in these posts:

https://forums.fast.ai/t/clearing-gpu-memory-pytorch/14637/2

https://stackoverflow.com/questions/55322434/how-to-clear-cuda-memory-in-pytorch

but nothing worked. I received the same error in a Kaggle kernel as well.

Any suggestions?
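For reference, the cleanup pattern both threads recommend is roughly the following. This is a minimal sketch, not a guaranteed fix: it assumes every Python reference to the model, optimizer, and tensors has been dropped first, since `empty_cache()` can only release blocks that are no longer referenced.

```python
import gc

# Guarded import so the sketch also runs on a machine without PyTorch.
try:
    import torch
    HAVE_TORCH = True
except ImportError:
    HAVE_TORCH = False

def release_cuda_cache():
    """Run Python GC, then return PyTorch's cached CUDA blocks to the driver.

    A no-op on CPU-only machines. Note: references to the model/optimizer
    must already be deleted (e.g. `del model`) before calling this, or the
    memory stays allocated.
    """
    gc.collect()
    if HAVE_TORCH and torch.cuda.is_available():
        torch.cuda.empty_cache()

# typical usage inside a notebook:
# del model, optimizer
release_cuda_cache()
```

Even after this, memory already consumed by live activations during a forward pass cannot be reclaimed, which is why the error usually persists until the workload itself is made smaller.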

Discussion 1 answer

Just reduce the sequence length and the batch size. It's that simple.
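To see why that helps: in a transformer the attention score tensor alone grows linearly with batch size and quadratically with sequence length. A rough back-of-the-envelope sketch (the batch size, head count, and sequence lengths below are hypothetical, not taken from the asker's run):

```python
def attention_activation_mib(batch, heads, seq_len, bytes_per_el=4):
    """Rough size in MiB of one attention score tensor of shape
    (batch, heads, seq_len, seq_len), assuming fp32 (4 bytes/element)."""
    return batch * heads * seq_len * seq_len * bytes_per_el / 2**20

# hypothetical settings: batch 32, 8 heads, 512 tokens
big = attention_activation_mib(32, 8, 512)    # 256.0 MiB per layer
# quarter the batch, halve the sequence length
small = attention_activation_mib(8, 8, 256)   # 16.0 MiB per layer
```

Cutting the batch from 32 to 8 and the sequence length from 512 to 256 shrinks this one tensor 16x, and there is one per layer. If the smaller batch hurts training, gradient accumulation can recover the effective batch size without the memory cost.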

27 Mar 2021, 17:39
Upvotes 0