Channel: TAO Toolkit - NVIDIA Developer Forums

Deploy yolo_v4 to Deepstream 7.0


Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) T4
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) I’ve installed the deepstream:7.0-samples-multiarch Docker image from the documentation
• Training spec file(If have, please share here) Not needed
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I used the deepstream:7.0-samples-multiarch container and trained yolo_v4 on the default dataset, which gave me an ONNX file and a config file. I then added the necessary keys to the configuration file, such as onnx-file, labelfile-path, gpu-id, and gie-unique-id. To deploy it inside my container I followed the steps in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (branch ds7.0, which uses the prebuilt TensorRT 8.6.2 mentioned there, although my container’s dependencies were different; I also added the DeepStream Python bindings by following the DeepStream 7.0 installation guide). When I run it, I get an error about incorrect output. It seems the pipeline cannot interpret the yolo_v4 output.
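For reference, here is a minimal sketch of the [property] section I would expect for a TAO YOLOv4 model, based on the sample configs shipped with deepstream_tao_apps. All file paths and the class count are placeholders for my setup; the custom parser function and library come from building the post_processor directory of that repo. The "incorrect output" symptom usually points at the parser lines being missing or wrong, since TAO YOLOv4 exports BatchedNMS output tensors that the default DeepStream parser does not understand:

```ini
[property]
gpu-id=0
# Placeholder paths - substitute the files exported from TAO
onnx-file=yolov4_resnet18.onnx
labelfile-path=labels.txt
# Preprocessing values matching the TAO yolo_v4 sample config
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=1
network-mode=2
# Placeholder - must match the number of classes in labels.txt
num-detected-classes=4
gie-unique-id=1
network-type=0
# Custom parser for the BatchedNMS outputs of TAO YOLOv4;
# built from the deepstream_tao_apps post_processor sources
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
output-blob-names=BatchedNMS
cluster-mode=3
```

If the two parser lines are absent, nvinfer falls back to its built-in detector parser and fails on the BatchedNMS tensors, which would produce exactly the kind of output-mismatch error described above.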


