Channel: TAO Toolkit - NVIDIA Developer Forums

DINO model integration to DeepStream


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 6.4
• TensorRT Version → 8.6

I’m trying to integrate the DINO model into DeepStream. I built the libnvds_infercustomparser_tlt.so file as described in this link, Deploying to DeepStream for DINO, and I set the following in the model config.txt file:

parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=../../libnvds_infercustomparser_tlt.so
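
For context, here is a minimal sketch of what the surrounding [property] section of such an nvinfer config typically looks like, assuming the layer names the log reports (inputs 3x544x960, pred_logits, pred_boxes). Everything except the two lines above is an illustrative placeholder, not my actual config; the normalization values and paths should be taken from the TAO DINO sample config, not from here:

```ini
[property]
gpu-id=0
# ImageNet-style normalization — placeholder values, verify against the TAO sample
net-scale-factor=0.0173520735728
offsets=123.675;116.28;103.53
model-color-format=0
# Engine path matching the one in the log below
model-engine-file=../../DINO/dino_model_v1.onnx_b1_gpu0_fp32.engine
infer-dims=3;544;960
batch-size=1
# 0 = FP32, matching the _fp32 engine name
network-mode=0
num-detected-classes=91
output-blob-names=pred_logits;pred_boxes
# 4 = no clustering; DETR-style models emit final boxes directly
cluster-mode=4
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=../../libnvds_infercustomparser_tlt.so
```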

When I run it, it crashes with this error:

0:00:09.138296035   115 0x5571b430a160 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/realtime/rtsp_restreamer_dino/DINO/dino_model_v1.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT inputs          3x544x960       
1   OUTPUT kFLOAT pred_logits     900x91          
2   OUTPUT kFLOAT pred_boxes      900x4           

0:00:09.245102808   115 0x5571b430a160 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/realtime/rtsp_restreamer_dino/DINO/dino_model_v1.onnx_b1_gpu0_fp32.engine
0:00:09.250499473   115 0x5571b430a160 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:1/configs/model_config.txt sucessfully
[ip-192-168-2-191:115  :0:135] Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0x7ffab1800010)
==== backtrace (tid:    135) ====
 0 0x0000000000042520 __sigaction()  ???:0
 1 0x00000000000075ed NvDsInferParseCustomNMSTLT()  ???:0
 2 0x000000000003ad8c nvdsinfer::DetectPostprocessor::fillDetectionOutput()  ???:0
 3 0x00000000000175d7 nvdsinfer::DetectPostprocessor::parseEachBatch()  ???:0
 4 0x000000000001eb0a nvdsinfer::InferPostprocessor::postProcessHost()  ???:0
 5 0x00000000000197e8 nvdsinfer::NvDsInferContextImpl::dequeueOutputBatch()  ???:0
 6 0x000000000001bc6d gst_plugin_nvdsgst_infer_register()  ???:0
 7 0x0000000000084a51 g_thread_unref()  ???:0
 8 0x0000000000094ac3 pthread_condattr_setpshared()  ???:0
 9 0x0000000000125bf4 clone()  ???:0
=================================
Segmentation fault (core dumped)

I’m running this on an AWS instance inside a Docker container.

4 posts - 2 participants

