TAO Toolkit - NVIDIA Developer Forums

Direct export/inference TAO CV models using TAO API


Hi,

I am exploring ways to run inference directly with pretrained TAO models.

Specifically, I'm following the notebook provided via NGC CLI (tao-getting-started_v5.2.0/notebooks/tao_api_starter_kit/api/object_detection.ipynb) against a TAO API deployment hosted on AWS. I'm using the lpdnet:unpruned_v1.0 PTM and attaching an inference_dataset to the created model (via POST request).
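For reference, the setup calls look roughly like this. This is a condensed sketch of the notebook cells, not verbatim: base_url and headers come from the earlier login step, ptm_id and inference_dataset_id from earlier list/create calls, and the exact endpoint paths and payload fields may differ between API versions.

import json
import requests

# Create the model entry (LPDNet is DetectNet_v2-based; sketch of the notebook cell)
data = json.dumps({"network_arch": "detectnet_v2", "encryption_key": "tlt_encode"})
response = requests.post(f"{base_url}/models", data=data, headers=headers)
model_id = response.json()["id"]

# Attach the pretrained model and the inference dataset to the created model
data = json.dumps({"ptm": [ptm_id], "inference_dataset": inference_dataset_id})
response = requests.patch(f"{base_url}/models/{model_id}", data=data, headers=headers)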

However, the export action fails. I'm sending the following POST payload:

actions = ["export"]
# original: 
# data = json.dumps({"job":parent,"actions":actions,"parent_id":model_id,"parent_job_type":"model"})
# actual: 
data = json.dumps({"actions":actions, "parent_id":model_id, "parent_job_type":"model"})
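
which I post to the jobs endpoint, roughly like this (a continuation of the sketch above; the endpoint path is the one from my notebook and may differ in other API versions):

endpoint = f"{base_url}/models/{model_id}/jobs"
response = requests.post(endpoint, data=data, headers=headers)
print(response.status_code, response.json())  # returns the export job id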

The resulting job status is:

{
  "action": "export",
  "created_on": "2024-02-08T03:21:15.860367",
  "id": "38d9****-****-**c6-****-18ba134****6",
  "last_modified": "2024-02-08T03:21:16.301992",
  "parent_id": null,
  "result": {
    "detailed_status": {
      "message": "Error due to unmet dependencies"
    }
  },
  "status": "Error"
}

I'm also exploring how to export a TensorRT engine so that I can host the pretrained model on a Triton Inference Server; any general guidance would be appreciated. What I have in mind is the standard flow sketched below.
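
Concretely, I was planning to build the engine from the exported model with trtexec and drop it into a Triton model repository. This is only a sketch under my own assumptions: the exported file name, the use of ONNX as the export format, and the minimal config.pbtxt are all placeholders.

import os
import shutil
import subprocess

# 1) Build a TensorRT engine from the exported model
#    (assumed to be ONNX; the file name below is hypothetical)
subprocess.run([
    "trtexec",
    "--onnx=lpdnet_export.onnx",   # artifact downloaded from the export job
    "--saveEngine=model.plan",
    "--fp16",
], check=True)

# 2) Triton model repository layout: <repo>/<model_name>/<version>/model.plan
repo = "model_repository/lpdnet/1"
os.makedirs(repo, exist_ok=True)
shutil.move("model.plan", os.path.join(repo, "model.plan"))

# 3) Minimal config.pbtxt; Triton can derive tensor shapes from the serialized engine
config = 'name: "lpdnet"\nplatform: "tensorrt_plan"\nmax_batch_size: 8\n'
with open("model_repository/lpdnet/config.pbtxt", "w") as f:
    f.write(config)

Starting Triton with tritonserver --model-repository=model_repository should then pick the model up, if I understand the flow correctly.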

Thanks.
