RuntimeError: CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 15.90 GiB total capacity; 14.97 GiB already allocated; 57.75 MiB free; 15.06 GiB reserved in total by PyTorch)
I tried the suggestions from these posts:
https://forums.fast.ai/t/clearing-gpu-memory-pytorch/14637/2
https://stackoverflow.com/questions/55322434/how-to-clear-cuda-memory-in-pytorch
But nothing worked; I get the same error in a Kaggle kernel as well.
Any suggestions?
Just reduce the sequence length and the batch size. It's that simple: activation memory scales with both, so lowering either one is usually enough to get back under the card's ~16 GiB.
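For context, here is a minimal sketch of where those two knobs live in a typical PyTorch training setup. The tensor names and sizes are placeholders, not taken from the original post; the point is simply that you shrink the sequence dimension of the inputs and the `batch_size` passed to the `DataLoader`.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for your tokenized inputs (hypothetical shapes):
# 1000 samples, each 512 tokens long.
input_ids = torch.randint(0, 30000, (1000, 512))

# Knob 1 -- sequence length: truncate to 128 tokens. Activation memory drops
# sharply, and for transformer self-attention the saving can be roughly
# quadratic in the sequence length.
max_len = 128
input_ids = input_ids[:, :max_len]

dataset = TensorDataset(input_ids)

# Knob 2 -- batch size: activation memory scales roughly linearly with it,
# so dropping from 32 to 8 (or 4) is often enough to clear the OOM.
train_loader = DataLoader(dataset, batch_size=8, shuffle=True)
```

If you are tokenizing raw text with a Hugging Face tokenizer instead of pre-built tensors, the equivalent of the truncation step is passing `truncation=True, max_length=128` when you encode the texts.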