Hello,
Can TAO containers run inference with the ONNX model exported from the HDF5 model, instead of the HDF5 model itself?
For instance, using:
!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
    -e $SPECS_DIR/specs.txt \
    -m $USER_EXPERIMENT_DIR/model.onnx
instead of:
!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
    -e $SPECS_DIR/specs.txt \
    -m $USER_EXPERIMENT_DIR/model.hdf5
Or do they only support inference with the HDF5 model and the TensorRT engine?
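For reference, by "inference using the ONNX model" I mean something along these lines outside of TAO, with onnxruntime (just a rough sketch; the input name, shape, and preprocessing are placeholders, not taken from my spec file):

import numpy as np
import onnxruntime as ort

# Load the exported model; fall back to CPU if no GPU provider is available.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # actual input name/shape depend on the exported model

# Hypothetical preprocessed batch matching the model's expected input shape.
dummy = np.random.rand(1, 3, 384, 1248).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])

What I'd like to know is whether the tao model faster_rcnn inference entrypoint itself can consume the .onnx file, so I don't have to handle pre/post-processing separately.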
Thanks