• Hardware - P4
• Network Type - Classification
• TLT Version - nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Default training configuration for TF1 classification with resnet18/resnet50
I performed inference on test images using both the trained model (in HDF5 format) and the engine generated from the exported ONNX file. The predictions differed significantly, to the point of assigning different classes to the same images. A sketch of how I compare the ONNX side against the trained model's predictions is below.
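This is a minimal sketch (not the TAO tooling itself) of how the ONNX model can be run with onnxruntime while applying the preprocessing the TF1 classification spec is expected to use, to check whether the divergence comes from a preprocessing mismatch rather than from the export step. The file paths, the 224x224 input size, the BGR channel order, and the caffe-style mean values are assumptions and should be adjusted to match the actual training spec.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

MODEL_PATH = "export/classification_model.onnx"   # hypothetical path
IMAGE_PATH = "test_images/sample_0001.jpg"        # hypothetical path

def preprocess(path, size=(224, 224)):
    # Resize, convert to BGR, subtract per-channel means, and lay out as NCHW.
    # Channel order and mean values are assumptions -- match your spec file.
    img = Image.open(path).convert("RGB").resize(size, Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32)
    arr = arr[:, :, ::-1]                            # RGB -> BGR (assumed)
    arr -= np.array([103.939, 116.779, 123.68])      # caffe-style means (assumed)
    arr = arr.transpose(2, 0, 1)                     # HWC -> NCHW
    return np.ascontiguousarray(arr[np.newaxis, ...])

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
probs = session.run(None, {input_name: preprocess(IMAGE_PATH)})[0]
print("predicted class index:", int(np.argmax(probs)))
print("top-1 score:", float(np.max(probs)))
```

If the onnxruntime prediction with the correct preprocessing already disagrees with the HDF5 model's prediction on the same image, the problem is likely in the export; if it agrees, the mismatch is more likely in how the engine-side inference preprocesses the images.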
Retraining with nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5 and testing with the .tlt model and the engine built from the exported .etlt file gives much better results, which are representative of the model's mAP.
I wasn't able to save the model or export it as .tlt/.etlt with TAO 5.0.0. Could that solve this issue? Is it supported?
Is there a way to fix this in that version of TAO?