I’ve been trying out two different networks in the TAO Toolkit: PointPillars and CenterPose.
PointPillars takes point cloud data and KITTI-formatted annotations as inputs, while CenterPose takes a 2D image plus a .json file containing the information needed for training; the camera’s intrinsic matrix is also required.
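For reference, the camera intrinsic matrix mentioned above is just the standard 3x3 pinhole-camera matrix. A minimal sketch of building one (the focal lengths and principal point below are placeholder values, not from any real camera):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Build a 3x3 pinhole-camera intrinsic matrix K.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    These are placeholder values for illustration only.
    """
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])

K = intrinsic_matrix(fx=1445.0, fy=1445.0, cx=960.0, cy=540.0)
```

Whatever annotation format the training .json ends up in, these four numbers are what the camera calibration has to supply.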
I’m considering training both networks for a 3D virtual-fence application, in which people or certain other objects such as cars need to be annotated.
-
Currently I’ve downloaded some open datasets for 3D object detection besides the KITTI dataset. If I want to add them to the training set, conversion is inevitable. What needs to be taken care of when doing this?
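For context, a conversion script ultimately has to emit KITTI’s 15-field label lines. A minimal sketch, assuming the source dataset already provides 2D boxes in pixels and 3D boxes in the camera frame (the `obj` dict and its keys are hypothetical, standing in for whatever the source annotations look like):

```python
def to_kitti_line(obj):
    """Format one object as a KITTI label line.

    KITTI expects 15 space-separated fields: type, truncation, occlusion,
    alpha, 2D bbox (left top right bottom, pixels), 3D dimensions
    (height width length, metres), location (x y z in the camera frame,
    metres), and rotation_y (radians). `obj` is a hypothetical dict
    mirroring a source dataset's per-object annotation.
    """
    fields = [
        obj["type"],                               # e.g. "Car", "Pedestrian"
        f'{obj.get("truncated", 0.0):.2f}',
        str(obj.get("occluded", 0)),
        f'{obj.get("alpha", 0.0):.2f}',
        *[f"{v:.2f}" for v in obj["bbox_2d"]],     # left, top, right, bottom
        *[f"{v:.2f}" for v in obj["dimensions"]],  # height, width, length (m)
        *[f"{v:.2f}" for v in obj["location"]],    # x, y, z camera frame (m)
        f'{obj["rotation_y"]:.2f}',
    ]
    return " ".join(fields)

line = to_kitti_line({
    "type": "Car",
    "bbox_2d": [100.0, 120.0, 300.0, 250.0],
    "dimensions": [1.5, 1.6, 3.9],
    "location": [2.0, 1.5, 15.0],
    "rotation_y": -1.57,
})
```

The error-prone parts in my experience would be the coordinate frame (KITTI locations are in the camera frame, not LiDAR), the dimension ordering (height, width, length), and the class-name mapping, so those seem worth double-checking first.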
-
It seems that currently only the Objectron dataset is available, and it contains only 8 classes. Is there any annotation tool for creating my own dataset to use for training CenterPose?