Hardware Platform: Jetson Orin
DeepStream Version: 6.3
JetPack Version: 5.1.2
TensorRT Version: 8.5.2
Issue Type: question
To generate a calibration cache file, I run `tao-deploy yolo_v4 gen_trt_engine` on my server, passing the .etlt file and the calibration images folder (`--cal_image_dir`). The generated cal.bin is then used to export the engine in INT8 mode on the Jetson. This worked without problems with DeepStream 6.0 on a Jetson Xavier. However, when I use that same calibration file on the Jetson Orin, inference no longer works correctly. I suspect the problem is that the cal.bin was generated with a different TensorRT version.
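For reference, this is roughly the server-side command I use; the model path, encryption key, spec file, and output paths below are placeholders for my actual setup, and I'm assuming the standard `tao-deploy gen_trt_engine` flags here:

```shell
# Server-side calibration run (paths and key are placeholders).
# gen_trt_engine builds an INT8 engine and writes the calibration cache (cal.bin),
# which I then copy to the Jetson for the INT8 export.
tao-deploy yolo_v4 gen_trt_engine \
  -m /workspace/models/yolov4_resnet18.etlt \
  -k $ENCRYPTION_KEY \
  -e /workspace/specs/yolo_v4_spec.txt \
  --data_type int8 \
  --batch_size 8 \
  --batches 10 \
  --cal_image_dir /workspace/data/calibration_images \
  --cal_cache_file /workspace/export/cal.bin \
  --engine_file /workspace/export/yolov4_int8.engine
```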
I would like to generate the cal.bin directly on the Jetson Orin, but I can't find an option for this in tao-converter or any other tool. Could you please help me with the command to generate the calibration cache file on the Jetson?