AI4D Yorùbá Machine Translation Challenge
$2,000 USD
Can you translate Yorùbá to English?
445 data scientists enrolled, 63 on the leaderboard
Translation, Unstructured, NLP
Nigeria
4 December 2020—30 May 2021
Colab PyTorch GPU out-of-memory error
published 27 Mar 2021, 16:45
edited 1 minute later

RuntimeError: CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 15.90 GiB total capacity; 14.97 GiB already allocated; 57.75 MiB free; 15.06 GiB reserved in total by PyTorch)

I tried the suggestions in these posts:

https://forums.fast.ai/t/clearing-gpu-memory-pytorch/14637/2

https://stackoverflow.com/questions/55322434/how-to-clear-cuda-memory-in-pytorch

But nothing worked; I got the same error in a Kaggle kernel.
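
Roughly what those posts suggest is the following sequence (a minimal sketch; the ~2 GiB dummy tensor here just stands in for the model and batches in my actual notebook):

import gc
import torch

# Dummy ~2 GiB float32 tensor standing in for whatever is filling the GPU
x = torch.empty(1024, 1024, 512, device="cuda")
print(torch.cuda.memory_allocated() / 2**30, "GiB allocated")

# The steps suggested in those threads:
del x                      # drop the last Python reference to the tensor
gc.collect()               # force Python's garbage collector to run
torch.cuda.empty_cache()   # return cached blocks from PyTorch's allocator to the driver

print(torch.cuda.memory_allocated() / 2**30, "GiB allocated,",
      torch.cuda.memory_reserved() / 2**30, "GiB reserved")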

Need suggestions!!!

Just reduce the sequence length and the batch size. It's that simple.
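
Activation memory in a forward pass scales roughly with batch size times sequence length, so halving either one roughly halves it. A minimal sketch with made-up sizes (your actual model, tokenizer and numbers will differ):

import torch
import torch.nn as nn

# Made-up sizes just to show the two knobs; swap in your own model and data.
VOCAB_SIZE = 32000
MAX_LEN    = 128    # was e.g. 512: shorter sequences mean smaller activations
BATCH_SIZE = 8      # was e.g. 32: activation memory grows roughly linearly with this

device = "cuda" if torch.cuda.is_available() else "cpu"

embed = nn.Embedding(VOCAB_SIZE, 512).to(device)
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=3, num_decoder_layers=3,
                       batch_first=True).to(device)

# Dummy source/target batches of shape (batch, seq_len)
src = torch.randint(0, VOCAB_SIZE, (BATCH_SIZE, MAX_LEN), device=device)
tgt = torch.randint(0, VOCAB_SIZE, (BATCH_SIZE, MAX_LEN), device=device)

out = model(embed(src), embed(tgt))   # cost scales with BATCH_SIZE * MAX_LEN
print(out.shape)                      # torch.Size([BATCH_SIZE, MAX_LEN, 512])

# If this still runs out of memory, keep halving BATCH_SIZE and/or MAX_LEN.

In a Hugging Face transformers pipeline the same two knobs are the tokenizer's max_length/truncation arguments and the DataLoader's batch_size.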