Channel: TAO Toolkit - NVIDIA Developer Forums

Running YoloV4 INT8 Version on Jetson Xavier NX: Compatibility of TensorRT Engine Generated on x86 Platform


• Hardware (Xavier/Nano)
• Network Type (Yolo_v4)

Hello,
I would like to run the INT8 version of YoloV4 on Jetson Xavier NX.

The documentation [TAO Deploy Installation - NVIDIA Docs] states: "Due to memory issues, you should first run the gen_trt_engine subtask on the x86 platform to generate the engine; you can then use the generated engine to run inference or evaluation on the Jetson platform and with the target dataset."
However, my understanding is that the conversion from ONNX to TensorRT should be performed on the platform where inference is actually run.
Can the TensorRT engine generated on an x86 platform be directly used on the Jetson platform?
Thank you.
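For reference, if the engine does need to be built on the target device, a minimal sketch of on-device generation with `trtexec` (which ships with TensorRT in JetPack) might look like the following. The file names `model.onnx`, `calib.cache`, and `model.engine` are placeholders, and the calibration cache is assumed to have been exported beforehand (e.g. by TAO during INT8 calibration):

```shell
# Run on the Jetson Xavier NX itself so the serialized engine matches
# its GPU architecture and installed TensorRT version.
# --int8 enables INT8 precision; --calib points at an existing
# INT8 calibration cache (placeholder name).
/usr/src/tensorrt/bin/trtexec \
  --onnx=model.onnx \
  --int8 \
  --calib=calib.cache \
  --saveEngine=model.engine
```

This is only a sketch of the general TensorRT workflow, not the exact TAO command from the post; `tao-deploy gen_trt_engine` wraps the same ONNX-to-engine conversion.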

1 post - 1 participant

