Channel: TAO Toolkit - NVIDIA Developer Forums

Classification TF2 - Question about exporting TRT engine


Please provide the following information when requesting support.

• Hardware (RTX)
• Network Type (Classification)
• TLT Version (5.5.0)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I'm having some difficulties exporting an SGIE/TF2 classifier for my DeepStream application. With previous versions I had no problems with the process: export the model, then use the TAO converter to turn the .etlt into a .trt engine.
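Roughly, my old flow looked like this (a sketch from memory; the key, input dimensions, output node name, and file names below are placeholders for my actual values):

tao-converter -k $KEY \
              -d 3,224,224 \
              -o predictions/Softmax \
              -e model.trt \
              model.etlt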

With the TAO image classification (TF2) notebook, everything is clear until step 10.
Note: I'm training the model with QAT.

When exporting the QAT model with:

Convert QAT model to TensorRT engine

!mkdir -p $LOCAL_EXPERIMENT_DIR/export_qat
!sed -i "s|EXPORTDIR|$USER_EXPERIMENT_DIR/export_qat|g" $LOCAL_SPECS_DIR/spec_retrain_qat.yaml
!tao model classification_tf2 export -e $SPECS_DIR/spec_retrain_qat.yaml

It outputs only an efficientnet-b0.qat.onnx file.

If I want to convert the trained model for a Jetson (DeepStream app), do I use the ONNX model as input for the TAO converter?
The TAO converter documentation only describes how to use an .etlt as input.
Or do I need to use trtexec?
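If trtexec is the way to go, I'm guessing something along these lines (just my guess; the ONNX file name is taken from the export step above, and the engine name is a placeholder):

trtexec --onnx=efficientnet-b0.qat.onnx \
        --int8 \
        --saveEngine=efficientnet-b0.qat.engine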

I'm a bit lost. Thanks in advance for the help!



