I’ve been training PointPillars networks using a custom dataset that combines self-collected point cloud and annotation data with the existing KITTI dataset. My current training dataset contains over 30K samples.
I ran pointpillars.ipynb in Firefox, and all of the samples mentioned above were processed by running the following snippet:
!tao model pointpillars dataset_convert -e $SPECS_DIR/pointpillars.yaml
The problem I encountered is this: I intended to add more samples and then rerun dataset_convert in order to fine-tune my model, but I hit the “Your Tab Just Crashed” error several times.
I then reduced the number of samples back to 30K and things went well again.
I traced system memory usage with htop and noticed that the tab crashed when memory was completely exhausted (my training machine has 64 GB of RAM). Memory usage kept rising while dataset_convert was running, until the tab crashed.
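In case it helps to reproduce the measurement, here is a rough psutil-based logger I wrote for diagnosis. It samples total RAM plus the RSS of the likely culprits every few seconds; the process-name filters ("tao", "firefox", "python") are guesses on my part, to be adjusted against whatever htop actually shows:

import time
import psutil

# Process-name filters are guesses -- adjust to what htop shows.
WATCH = ("tao", "firefox", "python")

def snapshot():
    vm = psutil.virtual_memory()
    print(f"total used: {vm.used / 2**30:.1f} GiB ({vm.percent}%)")
    for p in psutil.process_iter(["name", "memory_info"]):
        name = (p.info["name"] or "").lower()
        mem = p.info["memory_info"]
        if mem and any(w in name for w in WATCH):
            rss = mem.rss / 2**30
            if rss > 0.5:  # only report processes above 0.5 GiB
                print(f"  {name}: {rss:.1f} GiB")

while True:  # sample every 5 s while dataset_convert runs
    snapshot()
    time.sleep(5)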
If this is indeed what caused the crash, it means I can’t add any more samples due to the limited memory capacity.
Is there any workaround or way to avoid exhausting memory during dataset_convert?
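One workaround I’ve been considering, though I haven’t verified it, is converting the dataset in chunks: split the KITTI-style split file into pieces and run dataset_convert once per piece, so no single run has to hold all samples in memory. Both the split-file location and the assumption that the converter only processes the samples listed in it are guesses on my part; something like:

import subprocess
from pathlib import Path

# Hypothetical chunked conversion -- verify the paths and behavior
# against your pointpillars.yaml before trusting this.
SPLIT_FILE = Path("data/ImageSets/train.txt")  # assumed location
CHUNK_SIZE = 10_000

ids = SPLIT_FILE.read_text().split()
backup = SPLIT_FILE.with_name(SPLIT_FILE.name + ".bak")
backup.write_text("\n".join(ids) + "\n")  # keep the original split safe

for start in range(0, len(ids), CHUNK_SIZE):
    chunk = ids[start:start + CHUNK_SIZE]
    SPLIT_FILE.write_text("\n".join(chunk) + "\n")
    subprocess.run(
        ["tao", "model", "pointpillars", "dataset_convert",
         "-e", "specs/pointpillars.yaml"],  # replace with your $SPECS_DIR
        check=True,
    )

SPLIT_FILE.write_text(backup.read_text())  # restore the original split

If the converter overwrites its output on each run this won’t work as-is, so I’d appreciate confirmation either way.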