
Where can I find a sample config_infer_*.txt file to deploy an .onnx model exported by TAO 5.5.0 in Deepstream 6.4?


Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc): dGPU
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Classification
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TAO 5.5.0
• Training spec file(If have, please share here): NA
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.): NA

I have .onnx files of models trained using TAO 5.5.0 and I want to deploy them with Deepstream 6.4. I cannot find an example config_infer_*.txt file for an .onnx model; all available examples are for deploying .etlt files.

On this page it mentions the following:

So I looked for those sample files. Examples provided by Deepstream 6.4:

Both of them are for .etlt files, not .onnx files.

The Deepstream 6.4 doc for TAO Toolkit Integration with DeepStream mentions using .etlt files, with no mention of .onnx files.

Where can I find a sample config_infer_*.txt file to deploy .onnx files for Classification models directly in Deepstream 6.4?
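For reference, while the published TAO samples still show .etlt-style configs, the nvinfer plugin in Deepstream 6.4 can load ONNX models directly through its onnx-file property, so a config_infer file for the exported model can be written by hand. Below is a minimal sketch for a classification model; the file names, preprocessing values (net-scale-factor, offsets) and output-blob-names are model-dependent placeholders assumed for illustration, not values taken from this topic:

[property]
gpu-id=0
# TAO 5.x exports a plain .onnx, so onnx-file replaces the old
# tlt-encoded-model/tlt-model-key pair used for .etlt files (assumed path)
onnx-file=model.onnx
# Engine file nvinfer serializes on first run; the name is an assumption
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2        # 0=FP32, 1=INT8, 2=FP16
network-type=1        # 1 = classifier
process-mode=1        # 1 = full frame (primary); 2 = on detected objects (secondary)
gie-unique-id=1
classifier-threshold=0.2
# Preprocessing must match the training spec; these are typical
# ImageNet-style values and are assumptions
net-scale-factor=0.017507
offsets=123.675;116.28;103.53
model-color-format=0  # 0=RGB, 1=BGR
# Inspect the exported ONNX (e.g. with Netron) to confirm the output tensor name
output-blob-names=predictions/Softmax

On first run nvinfer should build a TensorRT engine from the ONNX and cache it at model-engine-file, so no separate conversion step should be required.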
