Channel: TAO Toolkit - NVIDIA Developer Forums

Incorrect pointpillar inference results


This is only indirectly related to TAO: what I describe here does not involve training my own PointPillars model with the TAO Toolkit. Instead, I downloaded the pretrained model from NGC and used it to produce a .engine file for inference.

Description:
First, I downloaded the pretrained model, pointpillars_deployable.etlt, from NGC.

PointPillarNet

After that, I started the following NVIDIA Docker container as suggested:

docker run --runtime=nvidia -it --rm -v /home/averai:/averai nvcr.io/nvidia/tao/tao-toolkit:5.1.0-pyt /bin/bash

Inside the container, I produced the .engine file for inference using tao-converter:

./tao-converter -k tlt_encode -e output.engine -p points,1x204800x4,1x204800x4,1x204800x4 -p num_points,1,1,1 -t fp16 pointpillars_deployable.etlt
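The -p flags pin the engine to a fixed input of 1x204800x4 points, so any point cloud fed to it has to be padded or truncated to exactly 204800 points. A minimal preprocessing sketch (the shape comes from the converter command above; the zero-padding convention and helper name are my assumptions):

```python
import numpy as np

MAX_POINTS = 204800  # fixed by the -p points,1x204800x4,... converter flags

def pad_points(points: np.ndarray) -> tuple[np.ndarray, int]:
    """Pad or truncate an (N, 4) point cloud (x, y, z, intensity) to the
    fixed (1, MAX_POINTS, 4) shape the engine expects.
    Returns the padded array and the number of valid points."""
    num = min(len(points), MAX_POINTS)
    out = np.zeros((1, MAX_POINTS, 4), dtype=np.float32)  # zero-pad (assumed convention)
    out[0, :num] = points[:num]
    return out, num
```

The valid count would then be fed to the engine's second input, num_points.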

I used git clone to pull the following two code repositories:

viz_3Dbbox_ros2_pointpillars

tao_toolkit_recipes

The two repositories are related: run_all_pcs.py calls the ./pointpillars binary built in tao_toolkit_recipes.
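For context, the link between the two repositories is simply the Python script shelling out to the compiled binary. A hypothetical sketch of such a wrapper (the helper name is made up; the flag values mirror the inference command I ran):

```python
def build_pointpillars_cmd(engine: str, bin_file: str) -> list[str]:
    """Assemble the argv for one inference run of the compiled sample.
    Hypothetical helper; the flags mirror the ./pointpillars CLI."""
    return [
        "./pointpillars",
        "-e", engine,                        # TensorRT engine from tao-converter
        "-l", bin_file,                      # input LiDAR .bin file
        "-t", "0.01",                        # score threshold
        "-c", "Vehicle,Pedestrian,Cyclist",  # class names
        "-n", "4096",                        # copied verbatim from the forum command
        "-p",
        "-d", "fp16",
    ]
```

Passing the returned list to subprocess.run(..., check=True) would then launch one inference run per .bin file.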

I followed the instructions here and ran the following commands:

cd tao_pointpillars/tensorrt_sample/test
mkdir build
cd build
cmake .. -DCUDA_VERSION=$CUDA_VERSION
make -j8

[screenshot: build output]

After the successful make, I copied the .engine file and a .bin file into the build folder.
The .bin file was one I had downloaded two months ago, when I first tried to train a PointPillars model with the TAO Toolkit. It had been processed to keep only the FOV LiDAR points out of the original 360-degree sweep, as described in the comments of gen_lidar_points.py.
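As a rough illustration of that FOV filtering step (the 90-degree forward field of view and the helper name are assumptions for illustration; gen_lidar_points.py may instead use the camera calibration to define the FOV):

```python
import numpy as np

def filter_fov(points: np.ndarray, fov_deg: float = 90.0) -> np.ndarray:
    """Keep only LiDAR points inside a forward-facing horizontal FOV.
    points: (N, 4) array of (x, y, z, intensity), with +x pointing forward.
    The fixed FOV angle is an assumption for illustration."""
    half = np.deg2rad(fov_deg / 2.0)
    azimuth = np.arctan2(points[:, 1], points[:, 0])  # angle from the +x axis
    mask = (points[:, 0] > 0) & (np.abs(azimuth) <= half)
    return points[mask]
```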

I then ran the following command to run inference:

./pointpillars -e output.engine -l ./input_bin/000000.bin -t 0.01 -c Vehicle,Pedestrian,Cyclist -n 4096 -p -d fp16

It worked and saved the detection results in a .txt file, but the results looked strange.
I drew the bounding boxes with viewer.py from the viz_3Dbbox_ros2_pointpillars folder, using the same .bin file and the .txt file as its inputs. The drawn bounding boxes are obviously incorrect.
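For anyone wanting to inspect the point cloud outside viewer.py, the .bin file is a raw binary dump. A small loader sketch (the (N, 4) float32 layout of x, y, z, intensity is the usual KITTI convention; verify it matches the file actually used):

```python
import numpy as np

def load_bin(path: str) -> np.ndarray:
    """Load a KITTI-style LiDAR .bin file as an (N, 4) float32 array of
    (x, y, z, intensity). The 4-float-per-point layout is the common
    KITTI convention; confirm it against your own data."""
    pts = np.fromfile(path, dtype=np.float32)
    return pts.reshape(-1, 4)
```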

Both visualized images were produced by running viewer.py on the same .bin file.
The left one shows the annotated (ground-truth) bounding boxes of the pedestrians, while the right one shows the boxes obtained by inference. One of the boxes was even classified as a vehicle.

To sum up, all I did was:

  1. .bin and .etlt model preparation
  2. .engine model conversion
  3. Building the existing code and running the commands according to the instructions.

I made no modifications to anything related to running PointPillars inference.

22 posts - 2 participants

