Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) RTX3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) YoloV8, BodyPose3d, PoseClassification
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If available, please share it here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
I am testing by changing only the input video in the example code here. For a video of a person sitting, the scores for standing and walking come out very high. Is there any way to fix this?
Also, what kind of dataset was the model trained on?
sitting_down  getting_up  sitting   standing  walking   jumping
0.000859      0.001437    0.009407  0.460987  0.502175  0.025136
0.000857      0.001434    0.009399  0.462083  0.501098  0.025130
0.000854      0.001430    0.009399  0.463774  0.499422  0.025121
0.000852      0.001427    0.009392  0.464831  0.498382  0.025117
0.000849      0.001423    0.009376  0.465962  0.497277  0.025112
0.000846      0.001420    0.009368  0.467028  0.496231  0.025107
0.000844      0.001417    0.009367  0.468695  0.494580  0.025097
0.000841      0.001414    0.009360  0.469728  0.493565  0.025092
0.000838      0.001410    0.009343  0.470849  0.492473  0.025088
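For reference, a minimal sketch of how I read these six-way scores and pick the predicted class (values hardcoded from the first row above; this is not the TAO API, just plain Python):

```python
# Per-class scores from the first row of the pasted log.
scores = {
    "sitting_down": 0.000859,
    "getting_up": 0.001437,
    "sitting": 0.009407,
    "standing": 0.460987,
    "walking": 0.502175,
    "jumping": 0.025136,
}

# The predicted class is simply the argmax over the score dict.
top_class = max(scores, key=scores.get)
print(top_class, scores[top_class])  # walking 0.502175, even though the subject is sitting
```

As the output shows, walking and standing together take more than 0.96 of the probability mass while sitting gets under 0.01, which is why the prediction is wrong on every frame.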