Fusing the raw data from different automotive sensors for real-world environment perception remains challenging due to their differing representations and data formats. In this work, we propose a novel method, termed High Dimensional Frustum PointNet, for 3D object detection in the context of autonomous driving. Motivated by the goals of data diversity and lossless data processing, our deep learning approach directly and jointly uses the raw data from camera, LiDAR, and radar. In more detail, given 2D region proposals and classifications from camera images, a high dimensional convolution operator captures local features from a point cloud enhanced with color and temporal information. Radars serve as adaptive plug-in sensors to refine object detection performance. As shown by an extensive evaluation on the nuScenes 3D detection benchmark, our network outperforms most previous methods.
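The frustum extraction step described in the abstract can be illustrated with a short sketch. The code below, a minimal and purely illustrative example (not the authors' implementation), assumes a 3x4 camera projection matrix P mapping 3D points to pixels; it gathers the LiDAR points whose projections fall inside a 2D region proposal and decorates them with per-point RGB color and a time channel, yielding the kind of high dimensional point representation the paper operates on. All function and parameter names are hypothetical.

```python
import numpy as np

def frustum_points(points, timestamps, P, image, box2d):
    """Select points inside the frustum of a 2D box and attach color + time.

    points:     (N, 3) 3D points, assumed in the camera coordinate frame
    timestamps: (N,)   per-point relative capture times
    P:          (3, 4) camera projection matrix (assumed known calibration)
    image:      (H, W, 3) uint8 camera image
    box2d:      (x1, y1, x2, y2) 2D region proposal in pixel coordinates
    returns:    (M, 7) features per kept point: x, y, z, r, g, b, dt
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])   # homogeneous coordinates
    proj = pts_h @ P.T                             # (N, 3) projected points
    z = proj[:, 2]
    front = z > 0.1                                # keep points in front of the camera
    safe_z = np.where(front, z, 1.0)               # avoid division by zero for masked points
    u = proj[:, 0] / safe_z
    v = proj[:, 1] / safe_z
    x1, y1, x2, y2 = box2d
    in_box = front & (u >= x1) & (u < x2) & (v >= y1) & (v < y2)
    ui = u[in_box].astype(int)                     # pixel lookup indices
    vi = v[in_box].astype(int)
    rgb = image[vi, ui] / 255.0                    # per-point color features
    return np.hstack([points[in_box], rgb, timestamps[in_box, None]])

# Usage with synthetic data: a pinhole camera P = K [I | 0] and a random scene.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P = np.hstack([K, np.zeros((3, 1))])
pts = rng.uniform(-10, 10, size=(1000, 3))
pts[:, 2] = rng.uniform(1, 40, size=1000)          # positive depth along the optical axis
image = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
ts = rng.uniform(0.0, 0.05, size=1000)
feats = frustum_points(pts, ts, P, image, box2d=(200, 150, 440, 330))
print(feats.shape)                                  # (M, 7)
```

In the paper's pipeline, such augmented points would then be fed to the high dimensional convolution operator for local feature extraction, with radar measurements optionally refining the result.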
High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar
2020 IEEE Intelligent Vehicles Symposium (IV), pp. 1621-1628
2020-10-19
1346619 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2020
TEMP-FRUSTUM NET: 3D OBJECT DETECTION WITH TEMPORAL FUSION
British Library Conference Proceedings | 2021
FRUSTUM-DESIGNED RADAR REFLECTOR FOR ELEVATOR POSITIONING
European Patent Office | 2023
RADAR LIDAR OBJECT DETECTION USING RADAR AND LIDAR FUSION
European Patent Office | 2023