Channel: TAO Toolkit - NVIDIA Developer Forums

Multitask_classification export to onnx inference


Hello,
I have trained a Multitask_classification model from the Jupyter notebook in TAO Getting Started v5.3.0. I exported the model in ONNX format and now want to run inference with onnxruntime. But when I ran the exported model on the validation set used in the notebook, accuracy was significantly worse: the three tasks dropped from 74%, 97%, and 73% to just 32%, 48%, and 47%. Is there any preprocessing I need to apply when running inference outside of TAO? Or is it possible that exporting to ONNX reduces accuracy?
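An accuracy drop like this is most often a preprocessing mismatch rather than a lossy export. TAO classification networks built from ImageNet-pretrained backbones commonly use caffe-style preprocessing: RGB-to-BGR channel swap and per-channel ImageNet mean subtraction, with no scaling, in NCHW layout. Whether that applies here depends on the preprocessing mode configured in your experiment, so the sketch below is an assumption to test, not a confirmed recipe; the function name and the use of a plain NumPy array as input are illustrative.

```python
import numpy as np

# Assumed caffe-style per-channel means, in BGR order (ImageNet statistics).
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(image_hwc_rgb: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB image, already resized to the network input
    size (input_image_size "3,80,60" means C=3, H=80, W=60), into the NCHW
    float32 tensor the exported ONNX model is assumed to expect."""
    x = image_hwc_rgb.astype(np.float32)
    x = x[:, :, ::-1]                 # RGB -> BGR channel swap
    x = x - IMAGENET_MEANS_BGR        # mean subtraction, no scaling
    x = np.transpose(x, (2, 0, 1))    # HWC -> CHW
    return x[None, ...]               # add batch dimension -> (1, 3, 80, 60)

# Feeding the tensor to onnxruntime would then look like this
# (left as comments so the snippet runs without the model file):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
#   name = sess.get_inputs()[0].name
#   outputs = sess.run(None, {name: preprocess(image)})
```

A quick way to confirm the expected layout is to print `sess.get_inputs()[0].shape`; if your current pipeline skips the BGR swap or the mean subtraction, the logits will be systematically off, which matches a large but not random accuracy drop.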

• Hardware: RTX A6000
• Network Type: Resnet10
• TAO toolkit version: 5.3.0
model_config {
  arch: "resnet"
  n_layers: 10
  # Set to true to match the template downloaded from NGC.
  use_batch_norm: true
  all_projections: true
  freeze_blocks: 0
  input_image_size: "3,80,60"
}
training_config {
  batch_size_per_gpu: 256
  num_epochs: 100
  checkpoint_interval: 1
  learning_rate {
    soft_start_cosine_annealing_schedule {
      min_learning_rate: 1e-6
      max_learning_rate: 1e-2
      soft_start: 0.1
    }
  }
  regularizer {
    type: L1
    weight: 9e-5
  }
  optimizer {
    sgd {
      momentum: 0.9
      nesterov: false
    }
  }
  pretrain_model_path: "/workspace/tao-experiments/multitask_classification/pretrained_resnet10/pretrained_classification_vresnet10/resnet_10.hdf5"
}
dataset_config {
  train_csv_path: "/workspace/tao-experiments/data/myntradataset/train.csv"
  val_csv_path: "/workspace/tao-experiments/data/myntradataset/val.csv"
  image_directory_path: "/workspace/tao-experiments/data/myntradataset/images"
}

1 post - 1 participant
