• Hardware : NVIDIA GeForce RTX 2060
• Network Type : UNet
• TLT Version : nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
• Training spec file : tao_unet_05_08_24_train_v5.txt (1.7 KB)
In the uploaded image you can see the console-output timestamp of the last step alongside the system time: training has been stuck for ~20 hours. This is the second time it has happened with this training config. I have run 4 other trainings with different configs and did not face this issue; the previous training also used the same network input size and the same batch size. Usually, if it is a GPU memory issue, TAO exits with an out-of-memory error, but no error messages are appearing here. GPU memory utilization is at 74% and I can see GPU activity in nvtop.
Also attaching a screenshot of htop. You can see that RAM utilization is also low.
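For reference, GPU activity can also be logged over time with a small polling script, which reports the same utilization and memory numbers that nvtop shows. This is only a minimal sketch, assuming the nvidia-ml-py (pynvml) package is installed on the host and that the RTX 2060 is device index 0:

```python
# Minimal GPU-activity logger (sketch). Assumes: pip install nvidia-ml-py,
# and that the GPU running the stuck training job is device index 0.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # single-GPU system assumed

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percentages
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # bytes
        print(f"{time.strftime('%H:%M:%S')}  "
              f"gpu={util.gpu}%  "
              f"mem={mem.used / mem.total * 100:.0f}% "
              f"({mem.used >> 20} MiB / {mem.total >> 20} MiB)")
        time.sleep(60)  # sample once a minute
finally:
    pynvml.nvmlShutdown()
```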