
Fowl Escapades

Helping Africa
2000 Zindi Points
Challenge completed over 5 years ago
Classification
195 joined
28 active
Start: Jan 16, 20
Close: Apr 19, 20
Reveal: Apr 20, 20
4th Place Solution
Notebooks · 22 Apr 2020, 02:26 · 7

Hi,

Congratulations to everyone who participated. Thanks especially to @Johnowhitaker for the starter notebook.

I have included my notebooks, the output files, and a short explanation of my solution here: https://github.com/btk1/Zindi-Fowl-Escapades

If you have any questions or comments, I will try my best to answer.

Thanks to Zindi, and I look forward to future competitions.

Discussion (7 answers)

Thanks for sharing — nice approach using progressive resizing. I used fastai2 with k-fold validation and xresnet50 and got 14th place; I will post my code soon. I faced some reproducibility issues, which cross-validation reduced. Could you please tell me if you faced any reproducibility issues as well? I learnt a lot from Jonathan Whitaker and Radek Osmulski.
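For readers unfamiliar with the k-fold validation mentioned above, here is a minimal sketch of the index-splitting idea in plain NumPy (the function name `kfold_indices` is illustrative, not fastai's API — fastai and scikit-learn provide their own utilities for this):

```python
import numpy as np

def kfold_indices(n, k, seed=42):
    """Yield k (train, valid) index splits over n samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)          # shuffle once so folds are random
    folds = np.array_split(idx, k)    # k near-equal chunks
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, valid

# Each sample appears in exactly one validation fold
for train, valid in kfold_indices(10, 5):
    print(len(train), len(valid))  # 8 2
```

Training one model per fold and averaging their predictions is what smooths out run-to-run variance.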

Hi aninda_bitm,

Yes, I did have reproducibility issues. After a quick search, I have found the following code which seems to work (at least until you restart your kernel).

https://forums.fast.ai/t/solved-reproducibility-where-is-the-randomness-coming-in/31628/11

https://docs.fast.ai/dev/test.html#getting-reproducible-results

Hope this helps:

seed = 42

# python RNG
import random
random.seed(seed)

# pytorch RNGs
import torch
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np
np.random.seed(seed)
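A quick way to sanity-check the seeding: wrap it in a helper and confirm that re-seeding reproduces the same random sequence (the helper name `seed_everything` is just illustrative; only the stdlib and NumPy RNGs are shown, since the PyTorch part needs a GPU to verify fully):

```python
import random
import numpy as np

def seed_everything(seed):
    # re-seed every RNG the training loop might draw from
    random.seed(seed)
    np.random.seed(seed)

seed_everything(42)
a = [random.random() for _ in range(3)] + list(np.random.rand(3))
seed_everything(42)
b = [random.random() for _ in range(3)] + list(np.random.rand(3))
print(a == b)  # True — same seed, same sequence
```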

Thanks for responding

ASSAZZIN

Hi,

Thanks for sharing your solution, and congratulations! I tried to use the pretrained DenseNet161 model, but there is a problem:

CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 7.43 GiB total capacity; 6.85 GiB already allocated; 2.94 MiB free; 6.91 GiB reserved in total by PyTorch)

Try restarting the kernel.

Hi sKzA,

Try reducing the batch size by half or even more. You will also need to change the batch size again when you do the resizing to 512x512, since larger images use more GPU memory per sample.

I hope this helps.
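To illustrate the batch-size fix: a minimal PyTorch sketch where random tensors stand in for the images (the sizes and the `bs` variable are illustrative, not taken from the original notebooks):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Random tensors as stand-ins for a small image dataset (64 samples, 3x8x8)
data = TensorDataset(torch.randn(64, 3, 8, 8), torch.randint(0, 2, (64,)))

bs = 32        # batch size that hit CUDA OOM
bs = bs // 2   # halve it; halve again if the error persists
loader = DataLoader(data, batch_size=bs)

xb, yb = next(iter(loader))
print(xb.shape)  # torch.Size([16, 3, 8, 8])
```

GPU memory scales roughly with batch size times per-image tensor size, which is why resizing up to 512x512 forces another reduction.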