Channel: TAO Toolkit - NVIDIA Developer Forums

What is "export specification file" for TAO deploy?

Hello there,

Recently, we trained a custom YOLOv4 model with the NVIDIA TAO API and exported the trained model.onnx file. Now the goal is to use TAO Deploy to convert it to a TensorRT engine on Jetson hardware.

My team has set up tao-deploy on the Jetson successfully by pulling the appropriate TensorRT container and installing tao-deploy with pip.

The instructions say, “Same spec file can be used as the tao model yolo_v4 export command”:

  • -e, --experiment_spec: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.

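For reference, this is roughly how we expect to invoke engine generation once we have the spec file. The paths below are placeholders, and every flag except `-e`/`--experiment_spec` is our assumption from the tao-deploy help output, not something we have verified:

```shell
# Sketch of the tao-deploy invocation we intend to run on the Jetson.
# Paths are placeholders; -m, -r, and --data_type are assumptions from
# the tao-deploy help text -- only -e/--experiment_spec is documented above.
MODEL=/workspace/export/model.onnx       # exported ONNX model
SPEC=/workspace/specs/export_spec.txt    # the "export specification file" we are missing
RESULTS=/workspace/results

# Echoed rather than executed, since we cannot run it without the spec file.
echo yolo_v4 gen_trt_engine \
  -m "$MODEL" \
  -e "$SPEC" \
  -r "$RESULTS" \
  --data_type fp16
```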
Where do we obtain this “export specification file”? Is it produced when the export API action is run?

We used the API to download all file artifacts generated by the export API action:

labels.txt   logs_from_toolkit.txt   model.onnx   nvinfer_config.txt  status.json

None of these seems to be the correct file. Any suggestions?

3 posts - 2 participants

