Channel: TAO Toolkit - NVIDIA Developer Forums

Failed to decode TrafficCamNet from etlt to ONNX


Please provide the following information when requesting support.

• Hardware - GeForce RTX 3050
• Network Type - TrafficCamNet
• TLT Version - nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I followed the advice shared in Fpenet retraining output file onnx but deepstream is using tlt - #12 by Morganh for decoding an etlt file to ONNX.

Steps to reproduce:

  1. Downloaded the etlt file using:
wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/trafficcamnet/pruned_v1.0.3/files?redirect=true&path=resnet18_trafficcamnet_pruned.etlt' -O resnet18_trafficcamnet_pruned.etlt
  2. Started the container and mounted the folder containing the file and the following script:
import argparse
import struct

from nvidia_tao_tf1.encoding import encoding


def parse_command_line(args):
    '''Parse command line arguments.'''
    parser = argparse.ArgumentParser(description='ETLT Decode Tool')
    parser.add_argument('-m',
                        '--model',
                        type=str,
                        required=True,
                        help='Path to the etlt file.')
    parser.add_argument('-o',
                        '--uff',
                        type=str,
                        required=True,
                        help='Path to the decoded output file.')
    parser.add_argument('-k',
                        '--key',
                        type=str,
                        required=True,
                        help='Encryption key.')
    return parser.parse_args(args)


def decode(tmp_etlt_model, tmp_uff_model, key):
    '''Strip the etlt header, then decrypt the remaining payload.'''
    with open(tmp_uff_model, 'wb') as temp_file, open(tmp_etlt_model, 'rb') as encoded_file:
        # The etlt file starts with a little-endian int32 holding the length
        # of the input node name, followed by the name itself.
        size = struct.unpack("<i", encoded_file.read(4))[0]
        # Read (and skip past) the input node name.
        input_node_name = encoded_file.read(size)
        # Everything after the header is the encrypted model payload.
        encoding.decode(encoded_file, temp_file, key.encode())


def main(args=None):
    args = parse_command_line(args)
    decode(args.model, args.uff, args.key)
    print("Decode successfully.")


if __name__ == "__main__":
    main()
  3. Decoded the etlt file using the command:
python decode_etlt.py -m trafficcamnet/resnet18_trafficcamnet_pruned.etlt -o trafficcamnet/trafficcamnet.onnx -k tlt_encode

which printed:
Decode successfully.

  4. Started a Python console with onnxruntime installed and ran the commands:
import onnxruntime as ort
trafficcamnet_path = "trafficcamnet/trafficcamnet.onnx"
session = ort.InferenceSession(trafficcamnet_path)

which resulted in the following error:

---------------------------------------------------------------------------
Fail                                      Traceback (most recent call last)
Cell In[21], line 2
      1 trafficcamnet_path = "trafficcamnet/trafficcamnet.onnx"
----> 2 session = ort.InferenceSession(trafficcamnet_path)

File ~/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:419, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    416 disabled_optimizers = kwargs["disabled_optimizers"] if "disabled_optimizers" in kwargs else None
    418 try:
--> 419     self._create_inference_session(providers, provider_options, disabled_optimizers)
    420 except (ValueError, RuntimeError) as e:
    421     if self._enable_fallback:

File ~/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:452, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
    450 session_options = self._sess_options if self._sess_options else C.get_default_session_options()
    451 if self._model_path:
--> 452     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    453 else:
    454     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from trafficcamnet/trafficcamnet.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:134 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.
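
As a sanity check, the etlt header can be read back independently of the decoder. Below is a minimal sketch (my own addition, assuming the same file path as in step 1) that re-reads the size-prefixed input node name that decode() skips over:

import struct

# Sketch: peek at the etlt header (assumes the etlt path from step 1).
# A plausible node-name length and a readable name suggest the header was
# parsed correctly and any problem lies in the decrypted payload instead.
etlt_path = "trafficcamnet/resnet18_trafficcamnet_pruned.etlt"
with open(etlt_path, "rb") as f:
    size = struct.unpack("<i", f.read(4))[0]
    print("node-name length:", size)
    print("input node name :", f.read(size))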

Is there something missing? Is there a limitation on decoding pruned models?
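
In case it helps with diagnosis, here is a minimal check (again my own sketch, assuming the decoded file path from step 3) of whether the decoded payload parses as an ONNX ModelProto at all:

import onnx

# Sketch: try to parse the decoder output as an ONNX ModelProto (assumes the
# output path from step 3). A parse failure, or a model without a graph,
# would mean the decoded bytes are not actually an ONNX graph.
decoded_path = "trafficcamnet/trafficcamnet.onnx"
try:
    model = onnx.load(decoded_path)
    print("ir_version:", model.ir_version)
    print("has graph :", model.HasField("graph"))
except Exception as exc:
    print("could not parse as ONNX:", exc)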
