Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): dGPU
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Classification
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TAO 5.5.0
• Training spec file(If have, please share here): NA
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.): NA
I have .onnx files of models trained using TAO 5.5.0 and I want to deploy them with DeepStream 6.4. I cannot find an example config_infer_*.txt file for a .onnx model; all available examples are for deploying .etlt files.
- Example provided in the deepstream_tao_apps repo: deepstream_tao_apps/deepstream_app_tao_configs/nvinfer/config_infer_secondary_vehiclemakenet.txt at release/tao_ds7.1ga · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. But this one is for DeepStream 7.1, not DeepStream 6.4.
On this page it mentions the following:
So I looked for those sample files. Examples provided by DeepStream 6.4:
Both of them are for .etlt files, not .onnx files.
The DeepStream 6.4 documentation for TAO Toolkit Integration with DeepStream mentions using .etlt files, with no mention of .onnx files.
Where can I find a sample config_infer_*.txt file to deploy .onnx files for Classification models directly in DeepStream 6.4?
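For context, this is roughly what I expect such a config to look like, adapted from the .etlt classifier examples by swapping tlt-encoded-model/tlt-model-key for the onnx-file key that nvinfer supports. All file paths and preprocessing values below are placeholders, not verified for any specific model:

```ini
[property]
gpu-id=0
# Preprocessing placeholders; the real values must match TAO training settings
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# ONNX export from TAO instead of tlt-encoded-model=...etlt
onnx-file=./classification_model.onnx
# Engine file nvinfer generates/loads on first run (name is a placeholder)
model-engine-file=./classification_model.onnx_b1_gpu0_fp16.engine
labelfile-path=./labels.txt
batch-size=1
# 2 = fp16 precision
network-mode=2
# 1 = classifier network
network-type=1
# 2 = secondary GIE (classify objects from the primary detector)
process-mode=2
classifier-threshold=0.5
```

Is something along these lines correct for DeepStream 6.4, or does ONNX deployment there require a different workflow?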