
Converting TAO-trained MaskRCNN models to ONNX for CPU inference


• Hardware: RTX 3090
• Network Type: Mask_rcnn

Hi, I want to perform CPU inference with TAO-trained MaskRCNN models. I was previously able to do this with TAO-trained DetectNet_v2 models by converting them to ONNX, and I was hoping to do the same for TAO-trained MaskRCNN models.
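
To be concrete, this is roughly the CPU-only sanity check I run against the converted DetectNet_v2 ONNX file (a minimal sketch; the file name and input shape are placeholders for my actual model, not values prescribed by TAO):

```python
# Sanity-check an ONNX model on CPU with onnxruntime.
# "detectnet_v2.onnx" and the 1x3x544x960 shape are placeholders
# for my actual exported model.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("detectnet_v2.onnx",
                            providers=["CPUExecutionProvider"])

# Read the declared input instead of hard-coding its name.
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

dummy = np.random.rand(1, 3, 544, 960).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print("output shapes:", [o.shape for o in outputs])
```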

I’m trying to do this because I want to serve the ONNX weights from a CPU-only instance of Triton Inference Server. This already works with the ONNX models converted from TAO-trained DetectNet_v2 (my Triton config is sketched below), and I’m trying to figure out how to do the same for the TAO-trained MaskRCNN models.
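
For reference, the working DetectNet_v2 model is served with Triton’s ONNX Runtime backend pinned to CPU, along these lines (the model name, tensor names, and dims below are placeholders from my setup):

```
name: "detectnet_v2_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input_1"          # placeholder; use the model's real input name
    data_type: TYPE_FP32
    dims: [ 3, 544, 960 ]
  }
]
output [
  {
    name: "output_cov"       # placeholder output
    data_type: TYPE_FP32
    dims: [ -1, -1, -1 ]
  }
]
instance_group [
  { kind: KIND_CPU }
]
```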

I understand that back in 2022 there was no support for direct TAO-trained MaskRCNN → ONNX conversion. Does that still hold true today?

If yes, are there other, perhaps indirect, ways to perform this conversion? (E.g. exporting to an intermediate file format that can then be converted to ONNX, along the lines of the sketch below?)
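
To illustrate the kind of indirect path I have in mind: if the trained MaskRCNN graph could be frozen to a TensorFlow .pb, a tf2onnx conversion such as the following should yield an ONNX file. This is only a sketch of the idea; the file names, tensor names, and opset are my assumptions, and I don’t know whether TAO’s MaskRCNN export can produce such a frozen graph in the first place:

```
# Hypothetical indirect path: frozen TF graph -> ONNX via tf2onnx.
# All file and tensor names below are assumptions, not TAO outputs.
python -m tf2onnx.convert \
    --graphdef mask_rcnn_frozen.pb \
    --output mask_rcnn.onnx \
    --inputs image_input:0 \
    --outputs detection_boxes:0,detection_masks:0 \
    --opset 13
```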
