
Inference Issues with a Custom-Trained Mobilenet_v2 Model Using TAO Toolkit in DeepStream

• Hardware - A2000
• Network Type (Classification) - Mobilenet_v2
• Docker container name - nvcr.io/nvidia/tao/tao-toolkit:5.3.0-deploy
• How to reproduce the issue?

I have trained an image classification model with the NVIDIA TAO Toolkit; the network type is Mobilenet_v2 on the TensorFlow 2 backend. Training produced the TLT files, which I then converted to ONNX using these commands:

tao model classification_tf2 train -e /workspace/tao-experiments/specs/spec.yaml --gpus 1

tao model classification_tf2 export -e /workspace/tao-experiments/specs/spec.yaml --gpus 1
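Before building the engine, it can help to confirm what the exported ONNX actually exposes. Below is a minimal sketch (the ONNX path is an assumption; substitute whatever onnx_file your spec's export section writes) that prints the model's output nodes, which for a Mobilenet_v2 classifier should be a single tensor sized to the number of classes:

import onnx

# Hypothetical export path - adjust to the onnx_file set in spec.yaml
model = onnx.load("/workspace/tao-experiments/export/model.onnx")
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)  # expect one output with num_classes entries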

I also generated the TRT engine file using the command:

tao deploy classification_tf2 gen_trt_engine -e /workspace/tao-experiments/specs/spec.yaml

spec.txt (1.6 KB)
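One thing worth checking at this stage is that the engine's output binding name matches the output-blob-names key in the SGIE config, since a mismatch is a common reason nvinfer attaches no classifier metadata. A minimal sketch (TensorRT 8.x bindings API, as shipped with TAO 5.3; the engine path is an assumption) that lists the bindings:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
# Hypothetical engine path - use the engine file produced by gen_trt_engine
with open("/workspace/tao-experiments/export/model.engine", "rb") as f, \
        trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))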

When I run inference in DeepStream with the generated engine file, the classifier meta comes back as None. I have attached the SGIE config file and the DeepStream code I used. Can anyone help resolve this issue?

dstest2_sgie1_config.txt (2.2 KB)

deepstream_test_2.txt (10.7 KB)
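For context, the check that fails looks roughly like the sketch below (pyds API as used in the standard deepstream_test_2.py sample; the probe name and placement here are assumptions, not necessarily the attached code). It walks frame and object metadata and prints any classifier labels, but classifier_meta_list comes back as None for every object:

import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # None here means the SGIE attached no classification output
            l_class = obj_meta.classifier_meta_list
            while l_class is not None:
                class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                l_label = class_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    print("classifier result:", label_info.result_label)
                    l_label = l_label.next
                l_class = l_class.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

If classifier_meta_list is None here, the SGIE attached no output at all, which typically points at the config or engine rather than the probe itself.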
