Channel: TAO Toolkit - NVIDIA Developer Forums

Preprocessing steps for UNET using TensorRT


Continuing the discussion from Why we need image pre and post processing when deploying a tlt model by using TensorRT?:

I’ve trained a UNET with the TAO Toolkit and tested inference with
“tao model unet inference …”, and the results looked quite good.

I then exported the ONNX file and used my local TensorRT trtexec to convert it to an .engine file. When I now run this engine in C++ on the same images, my results are noticeably worse. I exported with FP32, so I was expecting very similar output from local TensorRT.

My preprocessing looks like this:

  1. Load the image with OpenCV
  2. Resize the image to the network input size (cv::resize)
  3. Convert colors from BGR to RGB (cv::cvtColor)
  4. Calculate the mean and standard deviation over the image
  5. Convert the image to float and normalize: pixel_float = (float(pixel_uchar)/255.0 - mean) / stdDev

As a source for step 5 I used TensorRT example:
TensorRT/quickstart/common/util.cpp at release/8.6 · NVIDIA/TensorRT (github.com)
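The steps above can be sketched in numpy (a minimal illustration of my pipeline, not the actual C++ code; resizing is assumed to have happened already, e.g. via cv::resize):

```python
import numpy as np

def preprocess(img_bgr: np.ndarray, height: int, width: int) -> np.ndarray:
    """Sketch of steps 3-5 on an already-resized HWC uint8 BGR image."""
    assert img_bgr.shape[:2] == (height, width)
    img_rgb = img_bgr[:, :, ::-1]            # step 3: BGR -> RGB
    x = img_rgb.astype(np.float32) / 255.0   # step 5: scale to [0, 1]
    mean, std = x.mean(), x.std()            # step 4: per-image mean/std
    return (x - mean) / std                  # step 5: normalize
```

Note that this normalizes with statistics computed per image, and the output stays in HWC layout.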

In some other files I found that TAO uses the BGR color space and also transposes the image axes with (2, 0, 1), i.e. HWC → CHW:
tao_deploy/nvidia_tao_deploy/cv/unet/dataloader.py at main · NVIDIA/tao_deploy (github.com)
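If the exported model expects that TAO layout, the contrasting pipeline would look roughly like this (a hedged sketch: the mean/scale values here are placeholders, the actual constants have to be taken from the linked dataloader):

```python
import numpy as np

def preprocess_tao_style(img_bgr: np.ndarray,
                         mean: float = 127.5,
                         scale: float = 1.0 / 127.5) -> np.ndarray:
    """Hypothetical TAO-style preprocessing: keep BGR channel order,
    normalize with a fixed offset/scale (placeholder values, check
    dataloader.py), then transpose HWC -> CHW for an NCHW engine."""
    x = (img_bgr.astype(np.float32) - mean) * scale
    return np.transpose(x, (2, 0, 1))  # HWC -> CHW, i.e. axes (2, 0, 1)
```

The key differences to my pipeline are: no BGR→RGB swap, fixed rather than per-image normalization constants, and an explicit transpose to channel-first layout.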

But what do I need to do to make my TensorRT results match the tao inference results as closely as possible?

TAO Info:
task_group: ['model', 'dataset', 'deploy']
format_version: 3.0
toolkit_version: 5.3.0
published_date: 03/14/2024

local:
TensorRT: 8.6.1.6
CUDA: 12.1 update 1
cuDNN 8.9.0

Thank you
