
TAO LPRNET inference


Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) RTX4060
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) LPRnet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) 5.0
• Training spec file(If have, please share here)
lpr_spec.txt (1.1 KB)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

Hi, I have trained a TAO model and the output is an ONNX model. I am trying to run inference with the TAO Toolkit using the command below:

tao model lprnet inference -m /workspace/model.onnx -i /workspace/image.jpg -e /workspace/lpr_spec.txt -k nvidia_tlt

However, it produces the error below:

INFO: Starting LPRNet Inference.
INFO: Merging specification from /workspace/tao_ws/lpr_spec.txt
Unsupported model type: .onnx
INFO: Inference was interrupted
Execution status: PASS

Any advice? May I know why it reports an unsupported model type? Thanks.
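
For context, my guess is that the inference entrypoint expects the trained .hdf5 checkpoint or a TensorRT engine rather than a raw ONNX export, so a conversion step like the sketch below may be needed first (trtexec ships with TensorRT; the paths are placeholders and I have not verified this workflow):

# Build a TensorRT engine from the exported ONNX model
trtexec --onnx=/workspace/model.onnx --saveEngine=/workspace/model.engine

# Point inference at the engine instead of the ONNX file
tao model lprnet inference -m /workspace/model.engine -i /workspace/image.jpg -e /workspace/lpr_spec.txt -k nvidia_tlt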


Read full topic


Viewing all articles
Browse latest Browse all 409

Trending Articles