This paper presents a novel approach for joint object detection and orientation estimation in a single deep convolutional neural network, utilizing proposals calculated from 3D data. For orientation estimation, we extend an R-CNN-like architecture with several carefully designed layers. Two new object proposal methods are introduced to make use of stereo as well as lidar data. Our experiments on the KITTI dataset show that by combining proposals from both domains, high recall can be achieved while keeping the number of proposals low. Furthermore, our method for joint detection and orientation estimation outperforms state-of-the-art approaches for cyclists on the easy test scenario of the KITTI test dataset.
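The abstract does not spell out the added orientation layers, so the following is only a rough, hypothetical sketch of the general idea: an R-CNN-style head over pooled proposal features that jointly outputs class scores, box offsets, and a discretized orientation estimate. All layer sizes, the number of angle bins, and the class names are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch (not the authors' exact architecture): an R-CNN-style
# head that predicts class scores, box regression offsets, and an orientation
# estimate from ROI-pooled proposal features. Dimensions are assumptions.
import torch
import torch.nn as nn


class DetectionAndPoseHead(nn.Module):
    def __init__(self, in_features=512 * 7 * 7, num_classes=4, num_angle_bins=16):
        super().__init__()
        # Shared fully connected layers over ROI-pooled proposal features.
        self.shared = nn.Sequential(
            nn.Linear(in_features, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
        )
        self.cls_score = nn.Linear(4096, num_classes)        # object class scores
        self.bbox_pred = nn.Linear(4096, 4 * num_classes)    # per-class box offsets
        self.orientation = nn.Linear(4096, num_angle_bins)   # discretized viewpoint bins

    def forward(self, roi_features):
        x = self.shared(roi_features.flatten(start_dim=1))
        return self.cls_score(x), self.bbox_pred(x), self.orientation(x)


# Usage with dummy ROI-pooled features for 8 proposals.
head = DetectionAndPoseHead()
scores, boxes, angles = head(torch.randn(8, 512, 7, 7))
```

In this sketch the orientation branch simply shares the fully connected trunk with the detection outputs; the paper's contribution lies in how such layers are designed and how the 3D-data-driven proposals (stereo and lidar) are generated and combined, which is not reproduced here.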
Pose-RCNN: Joint object detection and pose estimation using 3D object proposals
2016-11-01
2716789 bytes
Conference paper
Electronic Resource
English
Factorization of view-object manifolds for joint object recognition and pose estimation
British Library Online Contents | 2015
Viewpoint-aware object detection and continuous pose estimation
British Library Online Contents | 2012