Please provide the following information when requesting support.
• Hardware - GeForce RTX 3050
• Network Type - FaceDetect
• TLT Version - nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
Steps to reproduce
- Download the FaceDetect pruned_v2.0 model from the FaceDetect model card on NVIDIA NGC
- Create decode_etlt.py to convert the .etlt file to .onnx:

```python
import argparse
import struct

from nvidia_tao_tf1.encoding import encoding

parser = argparse.ArgumentParser(description='ETLT Decode Tool')
parser.add_argument('-m', '--model',
                    type=str,
                    required=True,
                    help='Path to the etlt file.')
parser.add_argument('-o', '--uff',
                    required=True,
                    type=str,
                    help='Path to the decoded output file.')
parser.add_argument('-k', '--key',
                    required=True,
                    type=str,
                    help='Encryption key.')
args = parser.parse_args()
print(args)

with open(args.uff, 'wb') as temp_file, open(args.model, 'rb') as encoded_file:
    # The etlt file starts with a 4-byte little-endian length, followed by
    # the input node name, then the encrypted model payload.
    size = struct.unpack("<i", encoded_file.read(4))[0]
    input_node_name = encoded_file.read(size)
    encoding.decode(encoded_file, temp_file, args.key.encode())

print("Decoded successfully.")
```
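For reference, the byte layout that the script above assumes (inferred from its reads, not from an official spec) can be exercised with a small stdlib-only sketch; the node name and payload bytes here are made-up mock data:

```python
import io
import struct

def parse_etlt_header(stream):
    """Split an etlt-style stream into (input node name, remaining payload).

    Assumed layout, mirroring the decode script: a 4-byte little-endian
    length, the input node name, then the (still encrypted) model payload.
    """
    (size,) = struct.unpack("<i", stream.read(4))
    input_node_name = stream.read(size).decode()
    payload = stream.read()  # encrypted bytes; decoding needs the NGC key
    return input_node_name, payload

# Build a tiny mock file in memory to exercise the parser.
name = b"input_1"
mock = io.BytesIO(struct.pack("<i", len(name)) + name + b"\x00encrypted-bytes")
node, payload = parse_etlt_header(mock)
print(node)  # input_1
```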
- Start the TAO container: `docker run --runtime=nvidia -it --rm -v <local_dir>:<mapped_dir> nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash`
- Inside the container, run `python decode_etlt.py -m model.etlt -o model.onnx -k nvidia_tlt`
- Exit the container
- Load the ONNX model:

```python
import onnx

onnx_model = onnx.load("/home/tao_tutorials/model.onnx")
```
Error: