Hi,
For TAO API, we’ve been following this notebook for object detection on JPG and PNG images.
After deploying TAO API to an EKS cluster, I’ve constructed a KITTI PNG-format dataset (train, val, images, labels), successfully uploaded it to the TAO server, run TFRecords conversion, and then run various TAO YOLOv4 experiment actions (train, export, etc.).
My questions:
Once the imagery dataset is uploaded to the TAO API server:
- Using the TAO API, how can we individually fetch – and display – single images from the uploaded + converted dataset? E.g., if I have 1000 KITTI PNG images + labels in TAO dataset format on my EKS cluster, how do I fetch the image + label(s) for image #1, image #2, etc.?
- My use case: I’m building a frontend UI that interacts with the TAO API, and I’d like to be able to embed images in the browser for viewing after they’ve been uploaded to TAO via the `/datasets/{dataset_id}:upload` API call.
- How can we visualize the results of `inference` TAO API calls on a test imagery dataset? I see a way to download the results of a Done inference job, but is there a way to use the TAO API to fetch the inference output’s image + detection bounding boxes and display them? I see how the TAO launcher YOLOv4 notebook does this under the Visualize inferences section, outputting images + KITTI bbox labels; I’d like to achieve the same via the API.
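For context, the KITTI side of that visualization is easy to replicate client- or server-side once I can fetch an image and its label file. A minimal parsing sketch in Python (field layout per the standard KITTI label format; nothing TAO-specific is assumed here):

```python
from dataclasses import dataclass


@dataclass
class KittiBox:
    label: str
    left: float
    top: float
    right: float
    bottom: float


def parse_kitti_labels(label_text: str) -> list[KittiBox]:
    """Parse KITTI-format label lines into 2D bounding boxes.

    Each KITTI line has 15+ space-separated fields; the object class is
    field 0 and the 2D bbox (left, top, right, bottom) is fields 4-7.
    """
    boxes = []
    for line in label_text.strip().splitlines():
        fields = line.split()
        if len(fields) < 8:
            continue  # skip malformed lines
        boxes.append(KittiBox(fields[0], *(float(v) for v in fields[4:8])))
    return boxes
```

Once parsed, the boxes can be drawn over the fetched PNG in the browser (e.g. on a `<canvas>` overlay) or server-side with Pillow’s `ImageDraw`, which is effectively what the notebook’s Visualize inferences section does.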
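And for the browser-embedding part of my use case: assuming I can get raw PNG bytes back from the server somehow (that’s the part I’m asking about), embedding them in the UI without a separate static file server is straightforward with a data URI, roughly:

```python
import base64


def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URI usable in an <img src=...> tag."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# Hypothetical usage once the bytes are fetched from the TAO server:
# html = f'<img src="{to_data_uri(png_bytes)}" alt="frame 000001">'
```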
Thank you!