Channel: TAO Toolkit - NVIDIA Developer Forums

Output in DeepStream is wrong and differs from inference with TAO - YOLOv4 Tiny


I trained my LPR (license plate recognition) model with YOLOv4 Tiny. The inference results from TAO are good, but after deploying to DeepStream 6.3 the output is wrong.
This is my config:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
tlt-model-key=yolov4_tiny_crn_PLR
onnx-file=/deploy/Yolov4_tiny/license_plate_rec/yolov4_cspdarknet_tiny_epoch_080.onnx
labelfile-path=/deploy/Yolov4_tiny/label_plate_rec.txt
int8-calib-file=/deploy/Yolov4_tiny/license_plate_rec/cal.bin
model-engine-file=/deploy/Yolov4_tiny/license_plate_rec/yolov4_cspdarknet_tiny_epoch_080.onnx_b1_gpu0_fp32.engine
infer-dims=3;160;160
batch-size=1
process-mode=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# 0=detector, 1=classifier, 2=segmentation, 3=instance segmentation
network-type=0
num-detected-classes=35
interval=0
gie-unique-id=3
operate-on-class-ids=2;4
operate-on-gie-id=1

model-color-format=1
maintain-aspect-ratio=1


output-blob-names=BatchedNMS
#parse-classifier-func-name=NvDsInferParseCustomYoloV4LPR
parse-bbox-func-name=NvDsInferParseCustomYoloV4TLT
custom-lib-path=/deploy/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
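For reference, Gst-nvinfer's documented preprocessing is y = net-scale-factor * (x - mean), applied per channel, so with the values above the network sees mean-subtracted BGR input (model-color-format=1). A minimal NumPy sketch of that transform, using a hypothetical dummy frame, can help check whether the DeepStream preprocessing matches what the model was trained with in TAO:

```python
import numpy as np

# Values taken from the [property] section above
NET_SCALE_FACTOR = 1.0
OFFSETS = np.array([103.939, 116.779, 123.68])  # per-channel means (offsets=...)

def nvinfer_preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Apply nvinfer's documented transform: y = net-scale-factor * (x - mean).

    frame_bgr: HxWx3 uint8 image already resized/padded to infer-dims
    (160x160), in BGR channel order (model-color-format=1).
    """
    x = frame_bgr.astype(np.float32)
    y = NET_SCALE_FACTOR * (x - OFFSETS)   # per-channel mean subtraction
    return np.transpose(y, (2, 0, 1))      # HWC -> CHW, matching 3;160;160

# Hypothetical example: a uniform mid-gray frame
dummy = np.full((160, 160, 3), 128, dtype=np.uint8)
out = nvinfer_preprocess(dummy)
print(out.shape)  # (3, 160, 160)
```

If the TAO training spec used a different normalization (e.g. scaling to [0,1] or RGB order), this mismatch alone can produce wrong detections at deployment.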


2 posts - 2 participants


