Generating TensorRT engine from TAO deploy
Hi. When converting a model from ONNX to a TensorRT engine with tao deploy, if we need an FP16 engine, does it first build in FP32 and then convert to FP16, or does it convert directly to FP16? If there are...
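For reference, tao deploy wraps the TensorRT builder, and at the TensorRT level enabling FP16 does not produce an intermediate FP32 engine: the ONNX weights are parsed once and the builder selects FP16 kernels per layer during a single build. A minimal sketch with the TensorRT 8.x Python API (paths are placeholders):

```python
import tensorrt as trt

# Minimal FP16 engine build sketch (TensorRT 8.x assumed).
# "model.onnx" / "model.engine" are placeholder paths.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels; no separate FP32 engine is built first

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```

Even with the FP16 flag set, individual layers may still fall back to FP32 kernels inside the same engine when the builder decides that is faster or more accurate.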
Resize_interpolation_method: BILEANER': Enum type "InterpolationMethod" has...
Please provide the following information when requesting support. • Hardware (T4) • Network Type (Resnet18 Classification) • TAO Version (5.3.0) • Training spec file(If have, please share here) When I...
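As the error suggests, BILEANER is not a member of the InterpolationMethod enum; to my knowledge the accepted values are BILINEAR and BICUBIC. A sketch of the relevant fragment, assuming the TF1 classification spec layout:

```
model_config {
  arch: "resnet"
  n_layers: 18
  # Assumed field placement; valid InterpolationMethod values are BILINEAR and BICUBIC.
  resize_interpolation_method: BILINEAR
}
```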
Multitask_classification export to onnx inference
Hello, I have trained a Multitask_classification model from a Jupyter notebook in TAO Getting Started v5.3.0. I have exported the model in ONNX format and now I want to run inference using...
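For running the exported model outside TAO, a minimal onnxruntime sketch (the file name, input shape, and preprocessing below are placeholders and must be adapted to match the training configuration):

```python
import numpy as np
import onnxruntime as ort

# "multitask_cls.onnx" and the input shape are placeholders for the exported model.
sess = ort.InferenceSession("multitask_cls.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]                       # e.g. an NCHW float32 input
x = np.zeros((1, 3, 80, 60), dtype=np.float32)   # replace with a preprocessed image batch
outputs = sess.run(None, {inp.name: x})          # one output tensor per task head
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape, out.argmax(axis=-1))
```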
View Article"No module named 'keras'" reported in running tao_byom --help
I have just followed the steps in BYOM Converter - NVIDIA Docs. However, when I run “tao_byom --help” to verify the installation, the error “No module named ‘keras’” is reported, even though I can import...
Very low evaluation results for dino model by dino.ipynb in...
I tried to run dino.ipynb in tao-getting-started_v5.3, and after 12 epochs of training the evaluation results are very low. What could be the reason for this? Evaluation results: Evaluation...
Objects are detected but bbox are shifted with the same offset
Please provide the following information when requesting support. • Hardware (T4/V100/Xavier/Nano/etc) Nano • Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)...
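A constant offset on every box is often a sign that the resize-and-pad ("letterbox") preprocessing is not being undone when mapping network-space boxes back to the original image. A sketch of that back-mapping, assuming symmetric padding (the function name and padding scheme are assumptions to check against the actual preprocessing):

```python
def unletterbox(box, net_w, net_h, img_w, img_h):
    """Map a box from network (letterboxed) coordinates back to the original image.

    Assumes the image was scaled to fit (net_w, net_h) and padded symmetrically.
    """
    scale = min(net_w / img_w, net_h / img_h)
    pad_x = (net_w - img_w * scale) / 2.0
    pad_y = (net_h - img_h * scale) / 2.0
    x1, y1, x2, y2 = box
    return ((x1 - pad_x) / scale, (y1 - pad_y) / scale,
            (x2 - pad_x) / scale, (y2 - pad_y) / scale)
```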
Tao detectnet_v2 cost_function_config
Hi, referring to this forum, there is an auto_weighting function in cost_function_config. Could I know how to enable this function?
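For reference, a sketch of the cost_function_config fragment; to my understanding the switch is the enable_autoweighting field in the DetectNet_v2 spec (the class name and weights below are illustrative):

```
cost_function_config {
  # Assumption: this flag controls auto weighting in the DetectNet_v2 spec.
  enable_autoweighting: true
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
  target_classes {
    name: "car"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
}
```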
Can I use my own models in TAO, like a trained yolov5 or yolov8?
I have asked a similar question before (Can I use tao toolkit on my own models); has anything changed on this now?
TAO BYOM currently only supports single input and single output ONNX models...
I tried to integrate my trained detectnet_v2 model (ONNX) with DeepStream according to (BYOM Converter - NVIDIA Docs). My command is: tao_byom -m...
Source code for tao toolkit 3.0 tensorflow (v1.15) backend
I am able to find source code for tao toolkit 5.0 with tensorflow (v1.15) backend here: https://github.com/NVIDIA/tao_tensorflow1_backend. Can you please give the link to the source code of tao...
Failed to convert to tensorrt engine for yolov4 model trained in TAO
Please provide the following information when requesting support. • Hardware (T4/V100/Xavier/Nano/etc) RTX 3080 • Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)...
Inference with tensorrt engine file has different results compared with...
• Hardware - P4 • Network Type - Classification • TLT Version - nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 • Default training configuration for TF1 classification with resnet18/resnet50 I performed...
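When an engine and the framework model disagree on the same inputs, the most common cause is a preprocessing mismatch between the two inference scripts rather than the engine itself. A sketch of 'caffe'-style preprocessing (BGR channel order, per-channel mean subtraction, no scaling), which TF1 classification models are often trained with; whether that applies here is an assumption to verify against the training spec:

```python
import numpy as np

def preprocess_caffe(img_rgb):
    """Assumed 'caffe'-mode preprocessing: RGB->BGR, subtract per-channel means, no /255 scaling."""
    mean_bgr = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    img = img_rgb.astype(np.float32)[..., ::-1]   # RGB -> BGR
    img -= mean_bgr                                # per-channel mean subtraction
    return np.transpose(img, (2, 0, 1))[None]      # HWC -> NCHW with batch dimension
```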
Etlt model conversion to onnx
Dear @Morganh, I have trained a model on TAO 4, which exports in etlt format, but I want an ONNX model so that I can use it on another platform as well. Can you please suggest how I can convert etlt...
Action recognition preprocessing
• Hardware: NVIDIA RTX A6000 • Network Type: ActionRecognitionNet • Tao version: 5.3.0 Hello, I trained action recognition net using the notebook...
Getting wrong result on ONNX file using custom script (TLT to ONNX)
Hi @Morganh, I have converted a model from tlt to onnx, but when I perform inference it gives me multiple b-boxes. Please suggest where I am making a mistake. Below is the code....
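Many overlapping raw boxes usually mean the clustering / non-maximum-suppression step that DeepStream or the TAO inference pipeline normally applies is missing from the custom postprocessing. A minimal IoU-based NMS sketch, assuming boxes is an (N, 4) array of [x1, y1, x2, y2] with matching confidence scores:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Keep the highest-scoring box and drop any box overlapping it above iou_thr."""
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # intersection of box i with the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thr]    # keep only candidates with low overlap
    return keep
```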
Error while pruning .tlt model created during efficientdet-d0 training
Please provide the following information when requesting support. • Hardware (NVIDIA RTX 3080Ti) • Network Type (Efficientdet) • TLT Version (tao-tf1) • Training Spec file training_config {...
Inference Issues with Mobilenet_v2 Custom Training Model Using TAO Toolkit in...
• Hardware - A2000 • Network Type (Classification) - Mobilenet_v2 • Docker container name - (nvcr.io/nvidia/tao/tao-toolkit:5.3.0-deploy) • How to reproduce the issue ? I have trained an image...
Change prediction layer of pretrained model TAO
How can I change my prediction layer to 42 classes instead of 20 in VehicleMakeNet with TAO? • Hardware: Ubuntu 20 x86 with RTX 3060 • Network Type: VehicleMakeNet (Resnet18 backbone) • TLT Version:...
TAO dino training tensorboard image visualization not working
Please provide the following information when requesting support. • Hardware (T4) • Network Type : dino • Training spec file(If have, please share here) train: num_gpus: 1 num_nodes: 1...
Maskrcnn pruning not working
Please provide the following information when requesting support. • Hardware (NVIDIA RTX 3080Ti) • Network Type (Mask_rcnn) • TLT Version (tao_tf1) • Training spec file seed: 123 use_amp: False...