
I want to output quantized ONNX models with TAO Toolkit


Hi.

I want to quantize a model trained with Detectnet_v2 in TAO Toolkit and export it in ONNX format.
I tried running the command “tao model detectnet_v2 export” after creating calibration.tensor with reference to the following page, but the parameters of the exported ONNX model do not appear to be quantized.
Am I doing something wrong?
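For reference, one way to check whether the exported graph actually contains quantized parameters (QuantizeLinear/DequantizeLinear nodes or non-FP32 weight initializers) is to inspect it with the onnx Python package. This is only a minimal sketch, assuming the package is installed; the model path is a placeholder:

# Sketch: inspect an exported ONNX file for signs of quantization.
# Assumes the `onnx` package; the path below is a placeholder.
from collections import Counter

import onnx
from onnx import TensorProto

model = onnx.load("mobilenet_detector.onnx")  # placeholder path

# QDQ-quantized graphs contain QuantizeLinear / DequantizeLinear nodes.
op_counts = Counter(node.op_type for node in model.graph.node)
print("QuantizeLinear nodes:  ", op_counts.get("QuantizeLinear", 0))
print("DequantizeLinear nodes:", op_counts.get("DequantizeLinear", 0))

# Weight (initializer) data types: unquantized FP32 weights report as FLOAT.
dtype_counts = Counter(TensorProto.DataType.Name(init.data_type)
                       for init in model.graph.initializer)
print("Initializer dtypes:", dict(dtype_counts))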

TAO Toolkit version: 5.1.0
Network: Detectnet_v2 (pretrained model: mobilenet_v2)

Thank you.


tao model detectnet_v2 export \
  -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/mobilenet_detector_retrained.hdf5 \
  -e $USER_SPECS_DIR/detectnet_v2_mobilenet_retrain.txt \
  -o $USER_EXPERIMENT_DIR/experiment_dir_final/mobilenet_detector.onnx \
  --onnx_route tf2onnx \
  --gen_ds_config \
  --data_type int8 \
  --batches 10 \
  --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
  --cal_data_file $USER_EXPERIMENT_DIR/calibration.tensor
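For context, the calibration cache written here (calibration.bin) typically holds the INT8 scale information and is consumed when a TensorRT engine is built from the exported ONNX, rather than being baked into the ONNX weights. Below is a minimal sketch of such an engine-build step, assuming TensorRT 8.x Python bindings and placeholder paths; it is only an illustration, not the TAO-documented deploy command:

# Sketch: build an INT8 TensorRT engine from an exported ONNX model and an
# existing calibration cache. Assumes TensorRT 8.x; all paths are placeholders.
import tensorrt as trt

ONNX_PATH = "mobilenet_detector.onnx"           # placeholder path
CACHE_PATH = "calibration.bin"                  # placeholder path
ENGINE_PATH = "mobilenet_detector_int8.engine"  # placeholder path


class CacheOnlyCalibrator(trt.IInt8EntropyCalibrator2):
    """Calibrator that supplies no new batches and relies on an existing cache."""

    def __init__(self, cache_path):
        super().__init__()
        self._cache_path = cache_path

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        return None  # no calibration data; the cache already contains the scales

    def read_calibration_cache(self):
        with open(self._cache_path, "rb") as f:
            return f.read()

    def write_calibration_cache(self, cache):
        pass  # nothing to persist; we only read the existing cache


logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = CacheOnlyCalibrator(CACHE_PATH)

serialized_engine = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized_engine)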
