• Hardware (T4/V100/Xavier/Nano/etc): Xavier NX
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Detectnet_v2
• JetPack: 5.1.3
I want to run inference on the LPDNet model using jetson-inference. To do this I need to generate a TensorRT engine from the LPDNet ETLT model, as explained in jetson-inference/docs/detectnet-tao.md at master · dusty-nv/jetson-inference · GitHub.
That guide uses the tao-converter tool. However, tao-converter is deprecated, and its page says to use nvidia-tao-deploy instead.
So I’ve installed the TAO launcher and run the following command:
tao deploy detectnet_v2 gen_trt_engine -m usa_pruned.etlt -r export -k nvidia_tlt --data-type int8
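For context, the TAO launcher runs this command inside a Docker container, so the local directory holding usa_pruned.etlt has to be mapped into the container through ~/.tao_mounts.json. Mine looks roughly like the sketch below (the source/destination paths are placeholders, not my exact layout):

{
    "Mounts": [
        {
            "source": "/home/user/tao-experiments",
            "destination": "/workspace/tao-experiments"
        }
    ]
}

With a mapping like that, the -m and -r arguments would point at paths under /workspace/tao-experiments inside the container.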
and got the following output:
2025-06-18 03:18:01,058 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2025-06-18 03:18:01,375 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy
2025-06-18 03:18:01,522 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
Docker instantiation failed with error: 500 Server Error: Internal Server Error ("failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime (e.g. specify the --runtime=nvidia flag) instead.: unknown")
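As far as I understand the error, the launcher is starting the container with the docker --gpus flag, which the NVIDIA hook on this system rejects; the message says to use the NVIDIA Container Runtime (--runtime=nvidia) instead. A commonly suggested workaround (untested on my side) is to make nvidia the default Docker runtime in /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

followed by restarting Docker: sudo systemctl restart docker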
How do I fix the error and generate the TensorRT engine from the LPDNet ETLT model?
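For completeness, once the engine exists, my plan is to load it with jetson-inference roughly as detectnet-tao.md describes. The engine filename below is a placeholder for whatever gen_trt_engine produces, and the blob names are the standard detectnet_v2 ones, which I assume also apply to LPDNet:

detectnet --model=usa_pruned.engine \
          --labels=labels.txt \
          --input-blob=input_1 \
          --output-cvg=output_cov/Sigmoid \
          --output-bbox=output_bbox/BiasAdd \
          input.jpg output.jpg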