Channel: TAO Toolkit - NVIDIA Developer Forums

Using nvidia grounding dino in application

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : A40
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Grounding Dino
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I want to use the Grounding DINO model that NVIDIA provides in my application. From the model card at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/grounding_dino, my understanding is that this model cannot be used in DeepStream.

My use case is to perform inference with this Grounding DINO model on a live RTSP stream: I want to pass in a text prompt and a frame, run inference, and get the final detections.

Is there a way to deploy this model locally, and is there a sample script available to run inference?

Please suggest other approaches as well, if possible.
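One possible local-deployment route, sketched below, is the open-source Hugging Face `transformers` port of Grounding DINO rather than the NGC/TAO deliverable. This is an assumption, not NVIDIA's supported path: the checkpoint name `IDEA-Research/grounding-dino-tiny`, the frame cap, and the helper names are illustrative choices, and the RTSP loop uses OpenCV's `VideoCapture`.

```python
def format_prompt(labels):
    """Grounding DINO expects lowercase text queries, each ending with a period."""
    return " ".join(label.strip().lower().rstrip(".") + "." for label in labels)

def run_on_rtsp(rtsp_url, labels, max_frames=10):
    # Heavy dependencies are imported lazily so format_prompt stays usable
    # without them; all of these are assumptions about your environment.
    import cv2                     # pip install opencv-python
    import torch                   # pip install torch
    from PIL import Image          # pip install pillow
    from transformers import AutoProcessor, GroundingDinoForObjectDetection

    model_id = "IDEA-Research/grounding-dino-tiny"  # illustrative checkpoint
    processor = AutoProcessor.from_pretrained(model_id)
    model = GroundingDinoForObjectDetection.from_pretrained(model_id).eval()
    prompt = format_prompt(labels)

    cap = cv2.VideoCapture(rtsp_url)
    detections = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; the processor expects RGB.
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        inputs = processor(images=image, text=prompt, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        # Map raw logits/boxes back to phrase labels and pixel coordinates.
        result = processor.post_process_grounded_object_detection(
            outputs, inputs.input_ids, target_sizes=[image.size[::-1]]
        )[0]
        detections.append(result)
    cap.release()
    return detections
```

Usage would look like `run_on_rtsp("rtsp://camera/stream", ["person", "car"])`; note this decodes and scores every grabbed frame sequentially, so for real-time streams you would typically sample frames rather than process all of them.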

2 posts - 2 participants

