
Inference using ONNX and TAO?


Hello,

Can TAO containers run inference with the ONNX model exported from the HDF5 checkpoint, instead of using the HDF5 itself?

For instance, using:

!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                                   -e $SPECS_DIR/specs.txt \
                                   -m $USER_EXPERIMENT_DIR/model.onnx

instead of:

!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                                   -e $SPECS_DIR/specs.txt \
                                   -m $USER_EXPERIMENT_DIR/model.hdf5

Or do they only run inference with the HDF5 model and the TensorRT engine?
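
In case it matters, my fallback would be to run the exported ONNX directly with onnxruntime outside the TAO container. A minimal sketch of what I mean is below; the input shape and the random dummy batch are placeholders, not the real Faster R-CNN bindings, and the real preprocessing would have to match the TAO spec file:

import numpy as np
import onnxruntime as ort

# Load the exported model; falls back to CPU if CUDA is unavailable.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Inspect the graph's actual input/output bindings.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Placeholder batch just to confirm the graph executes; real use needs
# the same preprocessing (resize, normalization) as the TAO spec applies.
dummy = np.random.rand(1, 3, 384, 1248).astype(np.float32)
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])

But ideally I'd like to stay inside the TAO workflow if the CLI supports it.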

Thanks
