Learning Pseudo 3D Representation for Ego-centric 2D Multiple Object Tracking

Anonymous Authors

Abstract

Data association is a thorny problem for 2D Multiple Object Tracking (MOT) due to object occlusion. In 3D space, however, data association is far easier: with only a 3D Kalman filter, an online tracker can associate detections from LiDAR. In this paper, we rethink data association in 2D MOT and utilize a 3D object representation to separate objects in feature space. Unlike existing depth-based MOT methods, our 3D object representation can be jointly learned with the object association module. Moreover, the 3D representation is learned from video and supervised by 2D tracking labels, without additional manual annotations from LiDAR or a pretrained depth estimator. Building on 3D representation learning from Pseudo 3D (P3D) object labels in monocular videos, we propose a new 2D MOT paradigm, called P3DTrack. Extensive experiments show the effectiveness of our method: we achieve state-of-the-art performance on the ego-centric datasets KITTI and Waymo Open Dataset (WOD). Code will be released.
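To make the abstract's premise concrete, below is a minimal sketch of the kind of 3D-Kalman-filter association it alludes to (in the spirit of classic LiDAR trackers such as AB3DMOT), not the paper's actual P3DTrack model. The class name `KalmanTrack3D`, the noise constants, and the distance gate `max_dist` are illustrative assumptions; only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class KalmanTrack3D:
    """Constant-velocity Kalman filter over a 3D centroid (illustrative sketch).

    State: [x, y, z, vx, vy, vz]; measurement: [x, y, z].
    """
    def __init__(self, xyz, dt=0.1):
        self.x = np.r_[xyz, np.zeros(3)].astype(float)
        self.P = np.eye(6) * 10.0                  # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt            # constant-velocity motion model
        self.H = np.eye(3, 6)                      # observe position only
        self.Q = np.eye(6) * 0.01                  # process noise (assumed)
        self.R = np.eye(3) * 0.1                   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

def associate(tracks, detections, max_dist=2.0):
    """Match predicted track positions to 3D detections via Hungarian matching."""
    if not tracks or len(detections) == 0:
        return []
    preds = np.stack([t.predict() for t in tracks])
    # Pairwise Euclidean distances between predictions and detections.
    cost = np.linalg.norm(preds[:, None, :] - detections[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
```

In 3D, plain Euclidean distance plus a motion model is usually a sufficient association cue, which is exactly the gap the paper targets for the 2D setting.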

Video & Demo

(1) Tracking results on Waymo open dataset (WOD)

(2) Tracking results on KITTI dataset

(3) Tracking results for Pedestrian class

(4) Failure cases

The common failure cases of our method are heavily occluded scenes, such as parking lots, distant objects, and occluded objects very close to the camera. The 3D locations in these cases are naturally hard to estimate.

(5) At night

We show tracking results at night. Low visibility at night negatively affects the SfM system, but this only slightly degrades pseudo-label generation, and the network that learns 2D boxes and the 3D representation is even less affected.