Modifying (obfuscated) code inside tao toolkit docker

I have trained centerpose end-to-end on my custom data, using synthetic data generated by Isaac Sim (older version 2023.1.1) and the TAO Toolkit 5.2 docker with the centerpose model card.

I am able to use TAO Toolkit 5.2 for inference on images in a folder. However, I need to run inference on a live camera feed. I tried to modify the code inside script/inference.py, but the authors have used pyarmor_runtime and the code is obfuscated.

Could you please provide a guideline for running inference on a live camera feed? My camera is an Intel RealSense D435.
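
The only workaround I can think of so far is to dump camera frames into a folder and reuse the existing folder-based inference on that folder. Below is a rough sketch of the capture side using pyrealsense2 and OpenCV; the output directory and frame count are placeholders I chose, not anything required by TAO:

# Sketch only: grab color frames from the D435 and write them to a folder
# that the existing folder-based centerpose inference can consume.
# OUT_DIR and the frame count are assumptions, not TAO requirements.
import os
import cv2                    # pip install opencv-python
import numpy as np
import pyrealsense2 as rs     # pip install pyrealsense2

OUT_DIR = "/workspace/tao-experiments/centerpose/live_frames"  # assumed path
os.makedirs(OUT_DIR, exist_ok=True)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    for i in range(300):                        # ~10 s of frames at 30 fps
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if not color:
            continue
        img = np.asanyarray(color.get_data())   # BGR8 frame as a numpy array
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{i:06d}.png"), img)
finally:
    pipeline.stop()

This adds disk I/O latency, though, so I would still prefer a way to hook the camera feed directly into the inference pipeline.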

(base) mona@ada:~$ tao info
Configuration of the TAO Toolkit Instance
task_group: ['model', 'dataset', 'deploy']
format_version: 3.0
toolkit_version: 5.2.0.1
published_date: 01/16/2024
(base) mona@ada:~$ docker ps
CONTAINER ID   IMAGE                                                     COMMAND                  CREATED       STATUS       PORTS                                                                                                                             NAMES
1568b9368e74   nvcr.io/nvidia/tao/tao-toolkit:5.2.0-pyt2.1.0             "/opt/nvidia/nvidia_…"   7 hours ago   Up 7 hours                                                                
(base) mona@ada:~$ docker exec -it cranky_khayyam bash
groups: cannot find name for group ID 1002
I have no name!@ada:/opt/nvidia/tools$ ls
Jenkinsfile.develop  Jenkinsfile.main  README.md  README.txt  build.sh	converter  tao-converter
I have no name!@ada:/opt/nvidia/tools$ 
I have no name!@ada:/opt/nvidia/tools$ find / -name "centerpose" -type d 2> /dev/null
/usr/local/lib/python3.10/dist-packages/nvidia_tao_pytorch/cv/centerpose
/workspace/tao-experiments/centerpose
/workspace/tao-experiments/data/centerpose
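
For reference, here is a quick standard-library scan I used to see which modules in the installed package are obfuscated (assumption: the pyarmor bootstrap leaves a "pyarmor" reference near the top of each obfuscated file):

# Sketch only: list which .py files under the installed centerpose package
# look pyarmor-obfuscated versus plain Python.
import pathlib

pkg = pathlib.Path("/usr/local/lib/python3.10/dist-packages/"
                   "nvidia_tao_pytorch/cv/centerpose")
for py in sorted(pkg.rglob("*.py")):
    head = py.read_text(errors="ignore")[:200]   # only the first few lines matter
    tag = "obfuscated" if "pyarmor" in head.lower() else "plain"
    print(f"{tag:10s}  {py.relative_to(pkg)}")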

I also filed this as an issue at [QST] Modifying code inside tao docker · Issue #35 · NVIDIA/tao_pytorch_backend · GitHub, but that repository does not seem to be maintained as regularly.

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
