Channel: TAO Toolkit - NVIDIA Developer Forums

Inference on LPDNet onnx file


Hi, I am currently trying to use LPDNet in my environment. I have downloaded the LPDNet_usa_pruned_tao5.onnx file and understand that it is based on detectnet_v2.

I cannot, however, figure out how to run inference using onnxruntime in Python, and specifically how to process the outputs…

I tried following the Python script from this topic: Run PeopleNet with tensorrt - #21 by carlos.alvarez

But it seems the TensorRT engine returns one contiguous array, whereas the ONNX model does not.
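For what it's worth, I can inspect the ONNX input/output metadata with onnxruntime like this (a minimal sketch, the file path is just from my local setup):

import onnxruntime as ort

# Print the I/O names, shapes and dtypes of the LPDNet ONNX graph
sess = ort.InferenceSession("LPDNet_usa_pruned_tao5.onnx", providers=["CPUExecutionProvider"])

for inp in sess.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)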

This is my preprocessing function:

image = Image.fromarray(np.uint8(arr))
image_resized = image.resize(size=(self.model_w, self.model_h), resample=Image.BILINEAR)
img_np = np.array(image_resized, dtype=np.float32)

# HWC → CHW
img_np = img_np.transpose((2, 0, 1))

# Normalize to the [0.0, 1.0] interval (expected by the model)
img_np = (1.0 / 255.0) * img_np
img_np = np.expand_dims(img_np, axis=0)
return img_np

I figured out that the model input is (1, 3, 480, 640). In addition, the arr variable is an RGB image (read through cv2 and converted from BGR to RGB).
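For completeness, arr is produced roughly like this before it goes into the preprocessing function above (the file path and the preprocess name are just placeholders for my own code):

import cv2

arr = cv2.imread("car.jpg")                   # BGR, uint8
arr = cv2.cvtColor(arr, cv2.COLOR_BGR2RGB)    # convert BGR → RGB before preprocessing
data = self.preprocess(arr)                   # → float32 array of shape (1, 3, 480, 640)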

I run inference using:

input_dict = {}
input_dict[inputs.name] = data
outs = self.predictor.run(self.outputs, input_dict)
return outs

(inputs.name = 'inputs_1:0' and self.outputs = None)
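In case it matters, self.predictor is just an onnxruntime session created roughly like this (a minimal sketch; the providers list depends on the setup, and passing None as the output list makes run() return all outputs):

import onnxruntime as ort

# Create the session once; run() with output_names=None returns every graph output
self.predictor = ort.InferenceSession(
    "LPDNet_usa_pruned_tao5.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inputs = self.predictor.get_inputs()[0]   # single image input
self.outputs = None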

The outs I get back is a list consisting of:
[np.ndarray(shape=(1,1,30,40)), np.ndarray(shape=(1,4,30,40))]

I’m guessing the first array is the confidences and the second is the boxes. But how do I postprocess this into actual results…? In addition, the maximum confidence I get is 0.000146…
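My best guess at decoding these two tensors, assuming the usual detectnet_v2 grid convention (stride 16 for a 640x480 input and 40x30 grid, bbox scale 35.0, cell offset 0.5 — all of these values are assumptions on my part), would be a sketch like the one below, but even then the coverage values stay tiny:

import numpy as np

def decode_detectnet_v2(cov, bbox, conf_thresh=0.1,
                        stride=16.0, bbox_norm=35.0, offset=0.5):
    # Hypothetical decode of detectnet_v2-style outputs into pixel boxes.
    cov = cov[0, 0]    # (30, 40) coverage / confidence per grid cell
    bbox = bbox[0]     # (4, 30, 40) box regression per grid cell

    boxes, scores = [], []
    rows, cols = cov.shape
    for r in range(rows):
        for c in range(cols):
            score = float(cov[r, c])
            if score < conf_thresh:
                continue
            # grid-cell centre in input-image pixels (assumed convention)
            cx = c * stride + offset
            cy = r * stride + offset
            x1 = cx - bbox[0, r, c] * bbox_norm
            y1 = cy - bbox[1, r, c] * bbox_norm
            x2 = cx + bbox[2, r, c] * bbox_norm
            y2 = cy + bbox[3, r, c] * bbox_norm
            boxes.append([x1, y1, x2, y2])
            scores.append(score)
    return np.array(boxes), np.array(scores)

# e.g. boxes, scores = decode_detectnet_v2(outs[0], outs[1])

(I assume some NMS or clustering would still be needed on top of this, but with confidences around 0.000146 nothing survives the threshold anyway.)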

Thanks a lot.
