
Issues with batch size with DINO


• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 6.4
• TensorRT Version → 8.6

We are running inference on videos with DeepStream 6.4 using a DINO model.
We generate the TensorRT engine from a DINO ONNX file exported with the TAO Toolkit, with batch_size=-1 in the PyTorch-to-ONNX export configuration (i.e. a dynamic batch dimension).
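
For reference, an ONNX file with a dynamic batch axis can be sanity-checked outside DeepStream with trtexec; if the engine builds with a max batch of 8, the export really is dynamic. The input tensor name (inputs) and the 3x544x960 shape below are assumptions and should be replaced with the values from the actual export:

trtexec --onnx=dino_model_v1.onnx \
  --minShapes=inputs:1x3x544x960 \
  --optShapes=inputs:8x3x544x960 \
  --maxShapes=inputs:8x3x544x960 \
  --fp16 \
  --saveEngine=dino_model_v1.onnx_b8_gpu0_fp16.engine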

With batch-size=1 in the config, the DeepStream pipeline generated the engine file and ran inference successfully.

However, when running with a higher batch size, the engine was generated, but at inference time we got the following error:

Error: A batch of multiple frames received from the same source. Set sync-inputs property of streammux to TRUE.

Since we are running a single video, we only have a single source, so, as the message suggests, I added the following to the existing streammux configuration.
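
A sketch of the change (the property name is taken from the error message; the [streammux] group name assumes a deepstream-app-style config file):

[streammux]
sync-inputs=1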


After adding this, the error disappeared, but the pipeline did not process the video or produce any output the way it did with batch-size=1.

A part of the model config is as follows:

[property]
gpu-id=0
onnx-file=../../inference_base/dino/dino_model_v1.onnx
labelfile-path=../../inference_base/dino/labels.txt
model-engine-file=../../inference_base/dino/dino_model_v1.onnx_b8_gpu0_fp16.engine
batch-size=8
network-mode=2
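
For reference, a sketch of the streammux group that would pair with batch-size=8 in a deepstream-app-style config (width, height, and batched-push-timeout here are placeholders, not our exact values):

[streammux]
gpu-id=0
batch-size=8
batched-push-timeout=40000
width=1920
height=1080
sync-inputs=1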

My questions are:

  1. Do any additional settings need to be configured to make this work?
  2. Could the way we generated the ONNX file cause this behavior?
  3. Can we use an increased batch size with a single source?
