Coming from the (now defunct) TF Object Detection API, I found TAO familiar and easy to use because of the similarities between the two frameworks. One great feature of the former was the ability to create custom metrics to better understand particular aspects of our models; everything was open source, so there was no limit to what could be logged and customised. In TAO, it seems that, besides being closed source, the TensorBoard integration has almost no configurability (I've tested YOLOv4, but judging from the docs all the other supported models are very similar).
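For context, this is roughly what I mean by a custom metric: a minimal, standalone sketch (not TAO or TF OD API code; the metric name and values are placeholders) of logging an extra per-class scalar to TensorBoard with tf.summary, the kind of thing that was trivial to hook into the old open-source eval loop:

```python
import tensorflow as tf

# Hypothetical example: log a custom per-class scalar (e.g. recall on small
# objects) next to the standard eval metrics. Names/values are placeholders.
writer = tf.summary.create_file_writer("logs/custom_eval")

def log_custom_metrics(step, per_class_recall):
    """per_class_recall: dict mapping class name -> recall on small objects."""
    with writer.as_default():
        for class_name, recall in per_class_recall.items():
            tf.summary.scalar(f"small_object_recall/{class_name}", recall, step=step)
        writer.flush()

# Example usage after an evaluation pass:
log_custom_metrics(step=1000, per_class_recall={"car": 0.71, "pedestrian": 0.58})
```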
Is there any plan to add features that enable more customization? As other users have pointed out, one great feature of the TF OD API was the side-by-side visualization of ground-truth vs inference boxes during evaluation. Right now in TAO we only get the inference boxes, all drawn in black with no class labels (how are we supposed to tell whether a given box represents a correct classification or not?). A rough sketch of the kind of visualization I mean is below.
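This is only a sketch of what I'd like to see, written against plain TensorFlow (not something TAO exposes today; the paths, coordinates, and helper name are assumptions): overlay ground-truth boxes in one color and predicted boxes in another on the same image, and log it to TensorBoard.

```python
import tensorflow as tf

# Hypothetical sketch: draw ground-truth boxes (green) and predicted boxes
# (red) on one image and log it as a TensorBoard image summary.
# Boxes use relative [y_min, x_min, y_max, x_max] coordinates.
writer = tf.summary.create_file_writer("logs/eval_images")

def log_gt_vs_pred(step, image, gt_boxes, pred_boxes):
    """image: float32 [H, W, 3] in [0, 1]; gt_boxes / pred_boxes: [N, 4]."""
    img = tf.expand_dims(image, 0)                      # -> [1, H, W, 3]
    green = tf.constant([[0.0, 1.0, 0.0, 1.0]])         # RGBA for ground truth
    red = tf.constant([[1.0, 0.0, 0.0, 1.0]])           # RGBA for predictions
    img = tf.image.draw_bounding_boxes(img, tf.expand_dims(gt_boxes, 0), green)
    img = tf.image.draw_bounding_boxes(img, tf.expand_dims(pred_boxes, 0), red)
    with writer.as_default():
        tf.summary.image("gt_vs_pred", img, step=step)
```

Even something this simple would make it obvious at a glance which detections are correct, which the current all-black, unlabeled boxes don't allow.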
I understand that the closed-source nature of the product does not allow contributions, otherwise I'm sure many users would be glad to help. Maybe it's not the case for everyone, but in some applications being able to create custom metrics and visualizations is a real boost to both productivity and performance.