Channel: TAO Toolkit - NVIDIA Developer Forums

YOLO v3 .engine.fp16 layer mismatch in DeepStream inferencing


Complete information applicable to my setup:

Hardware Platform (GPU) - Tesla T4
DeepStream Version - DeepStream 6.1-Triton
TAO Toolkit Version - 5.0.0
TensorFlow Version - 1.15.5
TensorRT Version - 8.2.5-1+cuda11.4
NVIDIA GPU Driver Version - 535.183.01
CUDA Version - 12.2

I have an AWS VM with an NVIDIA Tesla T4 GPU.
I trained a yolo_v3 model in the TAO Toolkit Jupyter notebook, which produced the model in .hdf5 format.
I then converted the .hdf5 model to .onnx format.
Next, I built a DeepStream Docker container on the same AWS VM.
Finally, I copied my exported model into the DeepStream Docker container and tried to run the deepstream-app with the command below:
deepstream-app -c app_config.txt
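For reference, the export and engine-build steps described above would look roughly like this. This is a hedged sketch, not the exact commands used: the spec-file and model-file names are placeholders, and the `tao model yolo_v3 export` form is the TAO 5.x launcher syntax.

```shell
# 1. Export the trained .hdf5 checkpoint to ONNX with the TAO launcher.
#    (File names below are placeholders for illustration.)
tao model yolo_v3 export \
  -m yolov3_resnet18_epoch_200_retrain_QAT.hdf5 \
  -e specs/yolo_v3_retrain_resnet18_kitti.txt \
  -o yolov3_resnet18_epoch_200_retrain_QAT.onnx

# 2. Build a TensorRT engine from the ONNX file with trtexec,
#    matching the .engine.fp16 file name seen in the logs.
trtexec --onnx=yolov3_resnet18_epoch_200_retrain_QAT.onnx \
        --fp16 \
        --saveEngine=yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16
```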

here is the deepstream app config file :
app_config.txt (3.2 KB)

here is the yolo_v3 config file :
yolov3_config.txt (986 Bytes)

here is the labels file :
labels.txt (220 Bytes)

Now I am getting this error.

Command:
root@4e5e39ad1545:/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3# deepstream-app -c app_config.txt

Error:
0:00:02.376014532 858 0x7771ec002380 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x384x1248
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

0:00:02.395842644 858 0x7771ec002380 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16
0:00:02.453493076 858 0x7771ec002380 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_config.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:194>: Pipeline ready

Warning: Color primaries 5 not present and will be treated BT.601
** INFO: <bus_callback:180>: Pipeline running

ERROR: yoloV3 output layer.size: 4 does not match mask.size: 3
0:00:02.617463720 858 0x58613a5ec400 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)
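For context on the mismatch: the "output layer.size: 4 does not match mask.size: 3" message comes from the open-source yoloV3 bbox parser, which expects the three raw YOLO head tensors plus their anchor masks, whereas the engine deserialized above exposes the four BatchedNMS output tensors instead, so the two disagree. A TAO-exported engine that ends in BatchedNMS is normally parsed with the TAO custom parser. A hedged config sketch follows; the function name and library come from the deepstream_tao_apps post-processor, and the library path is a placeholder that may differ in this container:

```
[property]
# Parse the BatchedNMS outputs with the TAO custom parser instead of
# the open-source yoloV3 parser (which needs anchor masks).
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
```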

Please tell me the solution to this.

1 post - 1 participant

