Channel: TAO Toolkit - NVIDIA Developer Forums

TAO PointPillars for CPU inference


I have trained a PointPillars model for my application, and I need to run it on a computer with only an Intel CPU and no GPU. ONNX Runtime looked like the best solution for this. I have tried to import the model into ONNX Runtime, but the plugins are not populating. Looking at the TensorRT repo, the plugins all seem to pull in nvinfer and CUDA, which, to my understanding, I cannot use on my machine. Is my best option to rewrite the plugins so they do not use CUDA and nvinfer, or is there a better option I am missing for running inference on a CPU-only device?
Thanks

1 post - 1 participant


