Channel: TAO Toolkit - NVIDIA Developer Forums

LPDNet post processing


• Hardware Platform: GPU
• TensorRT version: TensorRT 8.6.1.6

LPDNet model

I am using the LPDNet_usa_pruned_tao5.onnx file to load and run the model on Triton Inference Server, and I am getting inference results from this model.
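Before looking at post-processing, note that the results also depend on how the input tensor is built. A minimal preprocessing sketch for the "input_1:0" tensor declared in the config below (3x480x640, FP32) might look like this; the RGB channel order and [0, 1] scaling are assumptions typical of TAO detection models, so check the model card or training spec before relying on them:

```python
import numpy as np

def preprocess(image_bgr, width=640, height=480):
    """Prepare a BGR frame for the lpdnet input "input_1:0" (1x3x480x640, FP32).

    Assumption: the model expects RGB values scaled to [0, 1]; the actual
    normalization may differ, so verify against the model's training spec.
    """
    h, w = image_bgr.shape[:2]
    # Simple nearest-neighbor resize via index maps (avoids a cv2 dependency).
    ys = np.arange(height) * h // height
    xs = np.arange(width) * w // width
    resized = image_bgr[ys][:, xs]
    rgb = resized[..., ::-1].astype(np.float32) / 255.0  # BGR -> RGB, scale to [0, 1]
    chw = np.transpose(rgb, (2, 0, 1))                   # HWC -> CHW
    return chw[np.newaxis]                               # add batch dim: (1, 3, 480, 640)
```

The resulting array can be passed as-is to a Triton client's `InferInput.set_data_from_numpy`.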

config file

name: "lpdnet"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input_1:0"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 480, 640 ]
  }
]
output [
  {
    name: "output_cov/Sigmoid:0"
    data_type: TYPE_FP32
    dims: [ 1, 30, 40 ]
  }
]
output [
  {
    name: "output_bbox/BiasAdd:0"
    data_type: TYPE_FP32
    dims: [ 4, 30, 40 ]
  }
]
dynamic_batching { }

How do I do post-processing for this LPDNet model?
In the TAO Toolkit sample code for detectnet_v2, a clustering config is required. What clustering config should I use for post-processing in this case?
Is any Python code available for doing the post-processing?
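LPDNet is a DetectNet_v2-style detector, so the two outputs above are a coverage grid ("output_cov/Sigmoid:0", 1x30x40) and per-cell box offsets ("output_bbox/BiasAdd:0", 4x30x40). A rough Python decoding sketch is below. The stride of 16 follows from the shapes (640/40 = 480/30 = 16); the offset scale of 35 is an assumed `bbox_norm`-style value, and greedy NMS stands in for the DBSCAN clustering that TAO's detectnet_v2 post-processor normally applies, so treat both as placeholders to verify against the model card:

```python
import numpy as np

def decode_detectnet_v2(cov, bbox, stride=16.0, scale=35.0, cov_thresh=0.4):
    """Decode DetectNet_v2-style outputs into absolute boxes.

    cov:  (1, H, W) coverage/confidence grid (post-sigmoid)
    bbox: (4, H, W) per-cell offsets, assumed (x1, y1, x2, y2) from cell center
    `scale` and the offset convention are assumptions; check the training spec.
    """
    _, H, W = cov.shape
    # Grid-cell centers in input-image pixel coordinates.
    gx, gy = np.meshgrid((np.arange(W) + 0.5) * stride,
                         (np.arange(H) + 0.5) * stride)
    x1 = gx - bbox[0] * scale
    y1 = gy - bbox[1] * scale
    x2 = gx + bbox[2] * scale
    y2 = gy + bbox[3] * scale
    keep = cov[0] > cov_thresh
    boxes = np.stack([x1[keep], y1[keep], x2[keep], y2[keep]], axis=1)
    return boxes, cov[0][keep]

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS as a simpler stand-in for TAO's DBSCAN clustering step."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return boxes[keep], scores[keep]
```

Usage would be `boxes, scores = decode_detectnet_v2(cov, bbox)` followed by `nms(boxes, scores)` on the two numpy arrays returned by the Triton client for one image.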

4 posts - 2 participants
