```
# Extract and generate NDVI, NBR, NDWI, and NDMI from S2 data in AWS
train_s2_df = parallel_process(train_df, n_processes=4)
val_s2_df = parallel_process(val_df, n_processes=4)
test_s2_df = parallel_process(test_df, n_processes=4)
val_s2_df.shape, test_s2_df.shape, train_s2_df.shape
```
When I run the code above, it doesn't progress even after hours of runtime. The previous cells all ran without error. What could the problem be?
Late response, but:
One common problem is accidentally not using the GPU. If torch is running in CPU mode on all of the training data, that alone can explain a very long runtime. Add a cell like:
```
if torch.cuda.is_available():
    print("GPU is available")
else:
    print("GPU is not available")
```
If you're still experiencing this and your GPU is working, one thing I did on my own solution was to reduce the training size. My first model used just a single site for train/valid, precisely because I feared what you're experiencing: less data is an easy way to confirm the pipeline works end to end. That first model actually had reasonable performance. You can cut data and runtime in other ways too (e.g. partial epochs), then scale back up once you have a baseline expectation.
Thank you very much for the response!